ICEpush / PUSH-315

Push doesn't idle when running in a portal

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: EE-3.3.0.GA_P01, 4.0.BETA
    • Fix Version/s: EE-3.3.0.GA_P02, 4.0
    • Component/s: Push Library
    • Labels:
      None
    • Environment:
      Portal portlet Liferay 6.2 Push

      Description

      When testing the chat-portlet example on Liferay, I noticed a problem with Push: when the page is left to idle, the push connection breaks.

      When running the corresponding chat sample as a plain servlet app on Tomcat, I don't see the same behaviour.

        Activity

        Deryk Sinotte added a comment -

        So I found that a couple of tweaks to the FixedSizeContentHandler.respond() method seemed to fix the problem:

        • removed the flush call from the StringWriter
        • added a flush call to the output stream after writing the content
        • disabled the setting of the Content-Length header
            public void respond(Response response) throws Exception {
                StringWriter writer = new StringWriter();
                writeTo(writer);
                writer.write("\n\n");
                byte[] content = writer.getBuffer().toString().getBytes(characterSet);
                response.setHeader("Content-Type", mimeType + "; charset=" + characterSet);
        //        response.setHeader("Content-Length", content.length);
                response.writeBody().write(content);
                response.writeBody().flush();
            }

        I'm just not sure whether to disable setting the Content-Length header in general or only for Liferay 6.2.
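        For illustration only, one way to keep that decision configurable would be a simple toggle around the header. This is a sketch, not the checked-in code, and the system property name is purely hypothetical:

            public void respond(Response response) throws Exception {
                StringWriter writer = new StringWriter();
                writeTo(writer);
                writer.write("\n\n");
                byte[] content = writer.getBuffer().toString().getBytes(characterSet);
                response.setHeader("Content-Type", mimeType + "; charset=" + characterSet);
                // Hypothetical toggle: only set Content-Length when explicitly enabled;
                // otherwise let the container choose the transfer encoding (e.g. chunked).
                if (Boolean.getBoolean("org.icepush.setContentLength")) {
                    response.setHeader("Content-Length", content.length);
                }
                response.writeBody().write(content);
                response.writeBody().flush();
            }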

        Deryk Sinotte added a comment -

        I've adjusted and tested the code on my end. The changes have been checked into:

        ./ossrepo/icefaces4/trunk/icefaces
        ./ossrepo/icefaces-ee/branches/icefaces-ee-3.3.0.GA-maintenance/icefaces
        ./ossrepo/icefaces-ee/tags/icepush-core-ee-3.3.0.GA_P02/icepush

        Jack Van Ooststroom added a comment -

        I confirmed Deryk's fix on my end as well. I deployed chat-portlet to Liferay 6.2, logged in, and let it sit idle for a while. The blocking connection looks healthy to me: neither the <browser id="..."/> nor the <noop/> responses are being truncated anymore.

        Marking this one as FIXED.

        Deryk Sinotte added a comment -

        Testing on WildFly 8.0.0 showed the following issue: a server warning that appears occasionally when navigating between demos. It is not reliably reproducible, and no applications seem affected.
        
        13:15:33,564 WARNING [org.icepush.BlockingConnectionServer] (Monitoring scheduler) Exception caught on org.icepush.BlockingConnectionServer TimerTask.: java.lang.RuntimeException: java.io.IOException: UT000029: Channel was closed mid chunk, if you have attempted to write chunked data you cannot shutdown the channel until after it has all been written.
                at org.icepush.BlockingConnectionServer.respondIfPendingRequest(BlockingConnectionServer.java:180) [icepush-ee.jar:]
                at org.icepush.BlockingConnectionServer.run(BlockingConnectionServer.java:109) [icepush-ee.jar:]
                at java.util.TimerThread.mainLoop(Timer.java:555) [rt.jar:1.7.0_07]
                at java.util.TimerThread.run(Timer.java:505) [rt.jar:1.7.0_07]
        Caused by: java.io.IOException: UT000029: Channel was closed mid chunk, if you have attempted to write chunked data you cannot shutdown the channel until after it has all been written.
                at io.undertow.conduits.ChunkedStreamSinkConduit.terminateWrites(ChunkedStreamSinkConduit.java:283) [undertow-core-1.0.0.Final.jar:1.0.0.Final]
                at org.xnio.conduits.ConduitStreamSinkChannel.shutdownWrites(ConduitStreamSinkChannel.java:178)
                at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(DetachableStreamSinkChannel.java:60) [undertow-core-1.0.0.Final.jar:1.0.0.Final]
                at io.undertow.servlet.spec.ServletOutputStreamImpl.close(ServletOutputStreamImpl.java:622) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                at org.icepush.http.standard.FixedSizeContentHandler.respond(FixedSizeContentHandler.java:55) [icepush-ee.jar:]
                at org.icepush.http.standard.FixedXMLContentHandler.respond(FixedXMLContentHandler.java:31) [icepush-ee.jar:]
                at org.icepush.BlockingConnectionServer$NoopResponseHandler.respond(BlockingConnectionServer.java:219) [icepush-ee.jar:]
                at org.icepush.SequenceTaggingServer$TaggingRequest$TaggingResponseHandler.respond(SequenceTaggingServer.java:112) [icepush-ee.jar:]
                at org.icepush.servlet.ServletRequestResponse.respondWith(ServletRequestResponse.java:218) [icepush-ee.jar:]
                at org.icepush.servlet.ThreadBlockingAdaptingServlet$ThreadBlockingRequestResponse.respondWith(ThreadBlockingAdaptingServlet.java:70) [icepush-ee.jar:]
                at org.icepush.SequenceTaggingServer$TaggingRequest.respondWith(SequenceTaggingServer.java:67) [icepush-ee.jar:]
                at org.icepush.BlockingConnectionServer.respondIfPendingRequest(BlockingConnectionServer.java:177) [icepush-ee.jar:]
        Deryk Sinotte added a comment -

        It seems likely that the change that closes the stream is what's leading to this. I've checked in changes to the EE 3 maintenance branch, the P02 tag, and the ICEfaces 4 trunk to remove the call that closes the stream and to rename the class to something that better reflects what it does.

        A quick test with Liferay 6.2 shows that the original problem is still fixed.
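        For reference, a minimal sketch of the shape the handler ends up taking after these changes, mirroring the snippet above rather than the checked-in source: the content is written and flushed, the Content-Length header is left unset, and the stream is never closed.

            public void respond(Response response) throws Exception {
                StringWriter writer = new StringWriter();
                writeTo(writer);
                writer.write("\n\n");
                byte[] content = writer.getBuffer().toString().getBytes(characterSet);
                response.setHeader("Content-Type", mimeType + "; charset=" + characterSet);
                // Content-Length is left unset; the container chooses the transfer encoding.
                response.writeBody().write(content);
                // Flush so the blocking client sees the data, but never close the stream:
                // closing it mid-chunk is what triggered UT000029 on Undertow/WildFly.
                response.writeBody().flush();
            }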


          People

          • Assignee: Deryk Sinotte
          • Reporter: Deryk Sinotte
          • Votes: 0
          • Watchers: 2

            Dates

            • Created:
            • Updated:
            • Resolved: