
Add new streaming mode for LargeText#722

Merged
timja merged 5 commits into jenkinsci:master from das7pad:large-text-streaming
Oct 22, 2025

Conversation

@das7pad
Contributor

@das7pad das7pad commented Oct 7, 2025

This PR adds a new streaming mode for LargeText that allows unbuffered reads (on the stapler layer).

There are two implementations in this PR: a "simple" mixed content type response (<log>\n<metadata>), and a proper RFC-compliant multipart/form-data response body behind standard content negotiation via the Accept header.

Browser support for multipart/form-data is good (it is part of the fetch spec), but support in the Java/Jenkins test framework is rather poor. That said, after a few iterations on the implementation I'm quite happy with the multipart approach and actually prefer it over the other mode.

Example:

GET /job/foo/210/execution/node/36/log/logText/progressiveHtml?start=42
Accept: multipart/form-data
Content-Type: multipart/form-data;boundary=<uuid>
--<uuid>
Content-Disposition: form-data;name=text
Content-Type: text/html;charset=UTF-8

<log text from position 42>
--<uuid>
Content-Disposition: form-data;name=meta
Content-Type: application/json;charset=utf-8

{"completed":true,"start":42,"consoleAnnotator":"<...>","end":1337}
--<uuid>--
full curl

Build: Stapler -> Jenkins -> Workflow-api -> pipeline-graph-view-plugin

$ curl -i -H 'Accept: multipart/form-data' 'http://localhost:8080/jenkins/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml?start=15040'
HTTP/1.1 200 OK
Server: Jetty(12.0.22)
Date: Tue, 07 Oct 2025 21:56:40 GMT
X-Content-Type-Options: nosniff
Stapler-Trace-001: -> evaluate(<hudson.model.Hudson@7716f588> :hudson.model.Hudson,"/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-002: -> evaluate(((StaplerProxy)<hudson.model.Hudson@7716f588>).getTarget(),"/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-003: -> evaluate(<hudson.model.Hudson@7716f588>.getJob("issues"),"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-004: -> evaluate(<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]> :com.cloudbees.hudson.plugins.folder.Folder,"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-005: -> evaluate(((StaplerProxy)<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]>).getTarget(),"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-006: -> evaluate(<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]>.getJob("one step and large output"),"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-007: -> evaluate(<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]> :org.jenkinsci.plugins.workflow.job.WorkflowJob,"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-008: -> evaluate(((StaplerProxy)<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]>).getTarget(),"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-009: -> evaluate(<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]>.getDynamic("48",...),"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-010: -> evaluate(<issues/one step and large output #48> :org.jenkinsci.plugins.workflow.job.WorkflowRun,"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-011: -> evaluate(((StaplerProxy)<issues/one step and large output #48>).getTarget(),"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-012: -> evaluate(<issues/one step and large output #48>.getExecution(),"/node/13/log/logText/progressiveHtml")
Stapler-Trace-013: -> evaluate(<CpsFlowExecution[issues/one step and large output#48]> :org.jenkinsci.plugins.workflow.cps.CpsFlowExecution,"/node/13/log/logText/progressiveHtml")
Stapler-Trace-014: -> evaluate(<CpsFlowExecution[issues/one step and large output#48]>.getNode("13"),"/log/logText/progressiveHtml")
Stapler-Trace-015: -> evaluate(<StepAtomNode[id=13, exec=CpsFlowExecution[issues/one step and large output#48]]> :org.jenkinsci.plugins.workflow.cps.nodes.StepAtomNode,"/log/logText/progressiveHtml")
Stapler-Trace-016: -> evaluate(<StepAtomNode[id=13, exec=CpsFlowExecution[issues/one step and large output#48]]>.getDynamic("log",...),"/logText/progressiveHtml")
Stapler-Trace-017: -> evaluate(<org.jenkinsci.plugins.workflow.support.actions.LogStorageAction@2aa8ca8d> :org.jenkinsci.plugins.workflow.support.actions.LogStorageAction,"/logText/progressiveHtml")
Stapler-Trace-018: -> evaluate(<org.jenkinsci.plugins.workflow.support.actions.LogStorageAction@2aa8ca8d>.getLogText(),"/progressiveHtml")
Stapler-Trace-019: -> evaluate(<hudson.console.AnnotatedLargeText@65bd38ed> :hudson.console.AnnotatedLargeText,"/progressiveHtml")
Stapler-Trace-020: -> <hudson.console.AnnotatedLargeText@65bd38ed>.doProgressiveHtml(...)
Content-Type: multipart/form-data;boundary=ea7fbd0c-8bae-4fc4-9e0f-2af7965618e3;charset=utf-8
Transfer-Encoding: chunked

--ea7fbd0c-8bae-4fc4-9e0f-2af7965618e3
Content-Disposition: form-data;name=text
Content-Type: text/html;charset=utf-8

<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.505Z]</span> + echo Slept 116 times
<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.505Z]</span> Slept 116 times
<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.505Z]</span> + sleep 0.2
<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.755Z]</span> + echo Slept 117 times
<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.755Z]</span> Slept 117 times
<span class="timestamp"><b>23:56:00</b> </span><span style="display: none">[2025-10-07T21:56:00.755Z]</span> + sleep 0.2
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> + echo Slept 118 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> Slept 118 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> + sleep 0.2
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> + echo Slept 119 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> Slept 119 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> + sleep 0.2
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.257Z]</span> + echo Slept 120 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.257Z]</span> Slept 120 times

--ea7fbd0c-8bae-4fc4-9e0f-2af7965618e3
Content-Disposition: form-data;name=meta
Content-Type: application/json;charset=utf-8

{"completed":true,"start":15040,"consoleAnnotator":"29WvJ6Nq5NEMeDyXEB5HvKPF+yUI8E1iE0pxj1zEnWQge/Qhtvl07DSrrRkXBg7CYFBkhn7Vdlrxgdn02RUBUvo1P4+mDdtgVm2KvYnG4jlxQPBnGhk/ml2/2zAlTw4eHQVwpd+oJlOnoilotnczf4AtApdjir0Y4Q3qSsZC4aT8Zh8kKSWSXpZIp0Te2+dnew+xBqiClRUdI91+/STvI3CGuKS01uhowwVEvA3PB7excDxQge/mcHZNa6ea9SMkKBVQBSxpRU2+1GPhJy9IUzjQzjJ1AnBIky+I26k7wCvlaUc5o+nTW9UsEXKcADHy","end":15661}
--ea7fbd0c-8bae-4fc4-9e0f-2af7965618e3--

Tail

$ curl -i -H 'Accept: multipart/form-data' 'http://localhost:8080/jenkins/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml?start=-200'
HTTP/1.1 200 OK
Server: Jetty(12.0.22)
Date: Tue, 07 Oct 2025 22:01:38 GMT
X-Content-Type-Options: nosniff
Stapler-Trace-001: -> evaluate(<hudson.model.Hudson@7716f588> :hudson.model.Hudson,"/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-002: -> evaluate(((StaplerProxy)<hudson.model.Hudson@7716f588>).getTarget(),"/job/issues/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-003: -> evaluate(<hudson.model.Hudson@7716f588>.getJob("issues"),"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-004: -> evaluate(<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]> :com.cloudbees.hudson.plugins.folder.Folder,"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-005: -> evaluate(((StaplerProxy)<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]>).getTarget(),"/job/one%20step%20and%20large%20output/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-006: -> evaluate(<com.cloudbees.hudson.plugins.folder.Folder@5d8ce06f[issues]>.getJob("one step and large output"),"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-007: -> evaluate(<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]> :org.jenkinsci.plugins.workflow.job.WorkflowJob,"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-008: -> evaluate(((StaplerProxy)<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]>).getTarget(),"/48/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-009: -> evaluate(<org.jenkinsci.plugins.workflow.job.WorkflowJob@1eee4b09[issues/one step and large output]>.getDynamic("48",...),"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-010: -> evaluate(<issues/one step and large output #48> :org.jenkinsci.plugins.workflow.job.WorkflowRun,"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-011: -> evaluate(((StaplerProxy)<issues/one step and large output #48>).getTarget(),"/execution/node/13/log/logText/progressiveHtml")
Stapler-Trace-012: -> evaluate(<issues/one step and large output #48>.getExecution(),"/node/13/log/logText/progressiveHtml")
Stapler-Trace-013: -> evaluate(<CpsFlowExecution[issues/one step and large output#48]> :org.jenkinsci.plugins.workflow.cps.CpsFlowExecution,"/node/13/log/logText/progressiveHtml")
Stapler-Trace-014: -> evaluate(<CpsFlowExecution[issues/one step and large output#48]>.getNode("13"),"/log/logText/progressiveHtml")
Stapler-Trace-015: -> evaluate(<StepAtomNode[id=13, exec=CpsFlowExecution[issues/one step and large output#48]]> :org.jenkinsci.plugins.workflow.cps.nodes.StepAtomNode,"/log/logText/progressiveHtml")
Stapler-Trace-016: -> evaluate(<StepAtomNode[id=13, exec=CpsFlowExecution[issues/one step and large output#48]]>.getDynamic("log",...),"/logText/progressiveHtml")
Stapler-Trace-017: -> evaluate(<org.jenkinsci.plugins.workflow.support.actions.LogStorageAction@2aa8ca8d> :org.jenkinsci.plugins.workflow.support.actions.LogStorageAction,"/logText/progressiveHtml")
Stapler-Trace-018: -> evaluate(<org.jenkinsci.plugins.workflow.support.actions.LogStorageAction@2aa8ca8d>.getLogText(),"/progressiveHtml")
Stapler-Trace-019: -> evaluate(<hudson.console.AnnotatedLargeText@6e889c8f> :hudson.console.AnnotatedLargeText,"/progressiveHtml")
Stapler-Trace-020: -> <hudson.console.AnnotatedLargeText@6e889c8f>.doProgressiveHtml(...)
Content-Type: multipart/form-data;boundary=6991e90e-73e1-44c5-9595-45349de16ee6;charset=utf-8
Transfer-Encoding: chunked

--6991e90e-73e1-44c5-9595-45349de16ee6
Content-Disposition: form-data;name=text
Content-Type: text/html;charset=utf-8

<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> Slept 119 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.006Z]</span> + sleep 0.2
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.257Z]</span> + echo Slept 120 times
<span class="timestamp"><b>23:56:01</b> </span><span style="display: none">[2025-10-07T21:56:01.257Z]</span> Slept 120 times

--6991e90e-73e1-44c5-9595-45349de16ee6
Content-Disposition: form-data;name=meta
Content-Type: application/json;charset=utf-8

{"completed":true,"startFromNewLine":true,"start":15486,"consoleAnnotator":"29WvJ6Nq5NEMeDyXEB5HvH8SHvGOyYZDR/nOLpGd10TCByYndbhHBYGFMIxlMyqHCGtk2Yw2kJ/EulH83QM9o4RuZzyJrKyzJepx4a8Fg8gmjE5W9mMcmrmo7DxdepH4F0p3EhJHiWlKyze+knPQLY6ZHfmEtVEc+NkDJ3O/a0s4L0Qk5i/H1U89CeVYULscoDofrFBuAEGKBvM/lW2PDBd2oHiPan0QHgzNdCuQcBavrhkh4F//UtfezJq+U963gKlXOJfG+0cn37qxhlb0TL8UK37YEVCNdXvcAiHNCk+OYIl+Qk5wnKjs9zbO0HdR","end":15661}
--6991e90e-73e1-44c5-9595-45349de16ee6--
pipeline {
    agent any
    options { timestamps() }

    stages {
        stage('Hello') {
            steps {
                script {
                    for(int i=1; i<=1; i++) {
                        stage("Stage - ${i}") {
                            sh 'for i in `seq 120`; do sleep 0.2; echo "Slept ${i} times"; done'
                        }
                    }
                }
            }
        }
    }
}

I'll copy the implementation notes for each commit here:

  • Simple implementation:

    • Clients request this new streaming mode via the request header
      X-Streaming: true.
      The server echoes it back to signal support for it.

    • A final new line is added to the body with a JSON payload that
      includes the start/end/completed state plus additional metadata from
      child classes. E.g. AnnotatedLargeText in Jenkins core will add the
      ConsoleAnnotator state when incrementally reading annotations.

      We are effectively creating a multi-part response with the first part
      being the plain-text/html console log followed by the second part with
      the metadata.

    • Bonus: Drop the line-end normalization that was required for Internet
      Explorer. IE is long dead.

    • Bonus: Read the log in larger chunks of 64 KiB and skip the buffering
      of each line.
      1 KiB disk reads are far too small for today's disks, from NVMe to
      spinning rust; with all the other buffering removed, going to 1 MiB
      would probably be fine too.
      For comparison, the default stream buffer size in Node.js is 64 KiB: https://nodejs.org/docs/latest-v22.x/api/stream.html#streamgetdefaulthighwatermarkobjectmode

    • Bonus: Add support for tailing via a negative ?start query parameter
      in LargeText. It will try to find the next newline boundary to start
      reading from and fall back to the fixed tail position if none is
      found. The JSON blob will contain "startFromNewLine":true when
      starting from a newline. The same heuristic is available when
      fetching more bytes via ?searchNewLineUntil=<last start>.

  • multipart/form-data implementation

    • Clients request this new streaming mode via the standard request
      header 'Accept: multipart/form-data'. The server responds with a
      standard multipart response, i.e.
      'Content-Type: multipart/form-data;boundary='.
    • The browser fetch API supports reading form-data multipart
      responses via 'response.formData()'.
    • The multipart response contains two parts:
      • "name=text", the original LargeText content and encoding as provided
        by the source (default UTF-8) via 'setContentType()'.
      • "name=meta", the json payload with streaming metadata as described
        in the previous commit.
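A client for the simple mode can recover the metadata by splitting off that final line. A sketch under the body layout described above (`splitSimpleResponse` is an illustrative name; it assumes the JSON line carries no trailing newline):

```javascript
// Sketch for the "simple" X-Streaming mode: the body is the log text
// followed by one final line holding the JSON metadata payload.
// splitSimpleResponse is an illustrative helper, not part of the PR's API.
function splitSimpleResponse(body) {
  // The metadata sits after the last newline; everything before it is log.
  const cut = body.lastIndexOf('\n');
  return {
    text: body.slice(0, cut + 1),          // log text, trailing newline kept
    meta: JSON.parse(body.slice(cut + 1)), // {completed, start, end, ...}
  };
}
```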

Testing done

See the extensive test suite. It is adopted in progressive text in Jenkins core and in the pipeline-graph-view-plugin.

Submitter checklist

  • Make sure you are opening from a topic/feature/bugfix branch (right side) and not your main branch!
  • Ensure that the pull request title represents the desired changelog entry
  • Please describe what you did
  • Link to relevant issues in GitHub or Jira
  • Link to relevant pull requests, esp. upstream and downstream changes
  • Ensure you have provided tests that demonstrate the feature works or the issue is fixed

- use descriptive variables
- move ?searchNewLineUntil parameter name into constant
- use text blocks instead of escaping JSON in simple strings

Co-authored-by: Tim Jacomb <21194782+timja@users.noreply.github.com>
@das7pad
Contributor Author

das7pad commented Oct 8, 2025

Thanks again for the review, Tim. I've addressed your points in 3a6f536.

@timja
Member

Thanks! I would prefer another review though if possible.

@jglick
Member

jglick commented Oct 16, 2025

The multipart system seems somewhat tricky; could you not just use WebSocket?

At any rate, looks OK to me from a quick glance but I do not have time for any sort of thorough review I am afraid.

@das7pad
Contributor Author

das7pad commented Oct 17, 2025

The multipart system seems somewhat tricky;

Did you see the "simple implementation"?

could you not just use WebSocket?

Something like the following?

  • progressive-text / pipeline-graph-view-plugin opens a WebSocket with ?start=x and attaches an event handler
  • stapler performs WebSocket negotiation and creates a writer that emits a WebSocket event for every write (data + read offset)
  • Jenkins AnnotatedText sits somewhere in between and creates a single ConsoleAnnotationOutputStream per WebSocket
  • stapler streams the log one buffered read (e.g. 64 KiB) at a time and sleeps for 1 s when a read returns 0 bytes.

Using SSE / EventSource would be simpler in that case, as we are describing a one-way data flow, and we would avoid all the WebSocket overhead.

@das7pad
Contributor Author

das7pad commented Oct 18, 2025

On using SSE/EventSource:

https://developer.mozilla.org/en-US/docs/Web/API/EventSource
Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be specially painful when opening various tabs as the limit is per browser and set to a very low number (6).

That would imply a limit of 6 concurrent Jenkins browser tabs, which seems quite low. Presumably we want to retain HTTP/1.1 support? We could fall back to polling when detecting no messages on the stream, which could indicate that too many connections are already open. What do we fall back to, though? The old streaming that results in excessive memory usage/OOM, which we wanted to fix here?

Apart from the connection limit, SSE is trivial to implement as there is nothing to negotiate: "just" send Content-Type: text/event-stream and keep writing to the response without buffering.
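For illustration, the SSE framing really is minimal: each event is a run of `data:` lines followed by a blank line. A sketch of the framing (not part of this PR):

```javascript
// Sketch of the SSE wire format: each event is one or more "data:" lines
// terminated by a blank line, sent with Content-Type: text/event-stream
// and no response buffering. Not part of this PR.
function sseFrame(payload) {
  // Multi-line payloads become one "data:" line each, per the SSE spec.
  return payload
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n') + '\n\n';
}
```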


Leaving the transport aside, I've taken a look at implementing this. An immediate issue that came up is that completed is static inside stapler/LargeText. The streaming has no way to know when to stop polling. We could add a limit of n consecutive empty reads. A low value results in excessive overhead from frequently re-opening a new WebSocket/EventSource; a high value is asking for DoS.

@timja
Member

timja commented Oct 18, 2025

That would imply a limit of 6 concurrent Jenkins browser tabs

Can we shut down the SSE stream when the tab visibility changes (visibilitychange), then ask for the missing data when it's refocused?

Presumably we want to retain HTTP/1.1 support?

Yes

What do we fallback to though?

This new implementation potentially?
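The visibility-driven lifecycle suggested above could be sketched as follows (browser-side, illustrative only; `openStream`/`closeStream` are hypothetical callbacks, and the `doc` parameter stands in for the browser's `document`):

```javascript
// Sketch of the visibilitychange idea: close the SSE stream when the tab
// is hidden, reopen from the last known offset when it becomes visible.
// openStream/closeStream are hypothetical callbacks; doc stands in for
// the browser's `document` so the wiring can be exercised outside one.
function streamWhileVisible(doc, openStream, closeStream) {
  let handle = null;
  const sync = () => {
    if (doc.visibilityState === 'visible' && handle === null) {
      handle = openStream(); // e.g. new EventSource(url + '?start=' + offset)
    } else if (doc.visibilityState !== 'visible' && handle !== null) {
      closeStream(handle);   // e.g. handle.close()
      handle = null;
    }
  };
  doc.addEventListener('visibilitychange', sync);
  sync(); // honor the initial visibility state
  return () => doc.removeEventListener('visibilitychange', sync);
}
```

On reopen, the client would pass the `end` offset from the last metadata payload as the new `?start`, so no log bytes are missed while the tab was hidden.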

@das7pad
Contributor Author

das7pad commented Oct 22, 2025

That would imply a limit of 6 concurrent Jenkins browser tabs

Can we shut down the SSE stream when the tab visibility changes (visibilitychange), then ask for the missing data when it's refocused?

👍 Good idea!

Presumably we want to retain HTTP/1.1 support?

Yes

👍

What do we fallback to though?

This new implementation potentially?

👍 WDYT about getting this implementation out then and following up with SSE/WebSocket afterwards?

@timja timja merged commit 425108f into jenkinsci:master Oct 22, 2025
14 checks passed
@das7pad
Contributor Author

das7pad commented Nov 2, 2025

Leaving the transport aside, I've taken a look at implementing this. An immediate issue that came up is that completed is static inside stapler/LargeText. The streaming has no way to know when to stop polling. We could add a limit of n consecutive empty reads. A low value results in excessive overhead from frequently re-opening a new WebSocket/EventSource; a high value is asking for DoS.

I've taken another look at this. It will be quite some work, as all the usages of LargeText/AnnotatedLargeText will need updating to let the WebSocket/SSE polling obtain the latest completed state. This could be done either with a new parameter getCompletedState or with a new method that the usages of LargeText/AnnotatedLargeText override. That seems fairly straightforward, "just" quite a lot of small changes in many places (Jenkins core and lots of plugins).
I'm not yet sure what to do with the step logs. They currently have a "fixed" size after constructing the AnnotatedLargeText: both the total raw log size (i.e. the file that stores all the step logs combined) and the computed step log size (i.e. the sum of the ranges found in the index file) are locked in. We could add a new method to the LargeText.Source interface for refreshing the log size.

@jglick
Member

jglick commented Nov 3, 2025

To be clear, I never requested WebSocket support. I was just surprised to see new code which logically involves real-time streaming not making use of it. If what you have works well enough, fine.

