This patch fixes an exception that was thrown when pasting.
While writing the test and comparing the paths in the SVG, I found a difference, which is fixed by calling the right constructor in inkdraw.js (so that the inheritance is taken into account).
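A minimal sketch of the kind of fix, with illustrative class shapes (only the `InkDrawOutline` name reflects inkdraw.js):

```js
class Outline {
  constructor(points) {
    this.points = points;
  }

  clone() {
    // `new Outline(...)` here would always build the base class and
    // drop subclass behavior; `this.constructor` preserves it.
    return new this.constructor(this.points.slice());
  }
}

class InkDrawOutline extends Outline {
  toSVGPath() {
    return this.points
      .map(([x, y], i) => `${i === 0 ? "M" : "L"}${x} ${y}`)
      .join(" ");
  }
}

// A pasted (cloned) outline keeps its SVG serialization:
new InkDrawOutline([[0, 0], [10, 5]]).clone().toSVGPath(); // "M0 0 L10 5"
```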
When scrolling quickly, the constant re-rendering of the detail view
significantly affects rendering performance, causing Firefox to
not render even the _background canvas_, which is just a static canvas
not being re-drawn by JavaScript.
This commit changes the viewer to only render the detail view while
scrolling if its rendering hasn't just been cancelled. This means that:
- when the user is scrolling slowly, we have enough time to render the
detail view before we need to change its area, so the user always
sees the full screen at high resolution.
- when the user is scrolling quickly, as soon as we have to cancel a
rendering we just give up, and the user will see the lower resolution
canvas. Then, when the user stops scrolling, we render the detail view
for the new visible area (see the sketch after this list).
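A minimal sketch of the idea, not the actual viewer code; all names are illustrative:

```js
class DetailViewScheduler {
  #renderTask = null;
  #justCancelled = false;

  onVisibleAreaChanged(area) {
    if (this.#renderTask) {
      // The previous render didn't finish before the area changed.
      this.#renderTask.cancel();
      this.#renderTask = null;
      this.#justCancelled = true;
    }
    if (this.#justCancelled) {
      // Scrolling quickly: give up and let the user see the
      // lower-resolution canvas instead of re-rendering constantly.
      return;
    }
    this.#startRender(area);
  }

  onScrollEnd(area) {
    // Scrolling stopped: always render the detail view again.
    this.#justCancelled = false;
    this.#startRender(area);
  }

  #startRender(area) {
    // Kick off a cancellable render task; on successful completion it
    // must clear both `this.#renderTask` and `this.#justCancelled`.
    this.#renderTask = { cancel: () => {} }; // placeholder
  }
}
```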
When rendering big PDF pages at high zoom levels, we currently fall back
to CSS zoom to avoid rendering canvases with too many pixels. This
causes zoomed-in PDFs to look blurry, and the text to be potentially
unreadable.
This commit adds support for rendering _part_ of a page (called
`PDFPageDetailView` in the code), so that we can render a portion of a
page in a smaller canvas without hitting the maximum canvas size limit.
Specifically, we render an area of that page that is slightly larger
than the area that is visible on the screen (100% larger in each
direction, unless we have to limit it due to the maximum canvas size).
As the user scrolls around the page, we re-render a new area centered
around what is currently visible.
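A sketch of how that area could be computed; the padding arithmetic and all names are assumptions, not the actual `PDFPageDetailView` code:

```js
function computeDetailArea(visible, pageWidth, pageHeight, maxPixels) {
  // Pad the visible rect by 100% of its size in each direction,
  // clamped to the page boundaries.
  let x = Math.max(0, visible.x - visible.width);
  let y = Math.max(0, visible.y - visible.height);
  let width = Math.min(pageWidth - x, 3 * visible.width);
  let height = Math.min(pageHeight - y, 3 * visible.height);

  // If the padded area would exceed the maximum canvas size, shrink it
  // while keeping it roughly centered on the visible area.
  const scale = Math.sqrt(maxPixels / (width * height));
  if (scale < 1) {
    const w = width * scale;
    const h = height * scale;
    x += (width - w) / 2;
    y += (height - h) / 2;
    width = w;
    height = h;
  }
  return { x, y, width, height };
}
```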
This base class contains the generic logic for:
- Creating a canvas and showing it when appropriate
- Rendering in the canvas
- Keeping track of the rendering state
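A rough sketch of the shape such a base class could take (names are illustrative):

```js
const RenderingStates = { INITIAL: 0, RUNNING: 1, FINISHED: 2 };

class BasePageView {
  renderingState = RenderingStates.INITIAL;
  canvas = null;

  #createCanvas(container) {
    const canvas = document.createElement("canvas");
    canvas.hidden = true; // Only shown once rendering has finished.
    container.append(canvas);
    return (this.canvas = canvas);
  }

  async render(container, draw) {
    const canvas = this.canvas ?? this.#createCanvas(container);
    this.renderingState = RenderingStates.RUNNING;
    try {
      await draw(canvas.getContext("2d"));
      this.renderingState = RenderingStates.FINISHED;
      canvas.hidden = false; // Show the canvas when appropriate.
    } catch (reason) {
      this.renderingState = RenderingStates.INITIAL;
      throw reason;
    }
  }
}
```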
These type tests perform type checking on a project that uses DOM APIs, intended to reflect the usage in a non-Node project.
Not loading the Node types in that project ensures that the library type declarations don't force a dependency on the Node types.
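For instance, a minimal `tsconfig.json` for such a project could use an empty `types` array, which stops TypeScript from automatically pulling in `@types/node` even when it is installed (the paths here are illustrative, not the actual test setup):

```json
{
  "compilerOptions": {
    "lib": ["ES2022", "DOM"],
    "types": [],
    "noEmit": true,
    "strict": true
  },
  "include": ["./types-test/**/*.ts"]
}
```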
Currently we modify the EXIF-block in place, which may end up "breaking" the JPEG-data of the original PDF document, since e.g. a document saved from the viewer would no longer contain the real EXIF-block.
Hence the EXIF-block replacement is moved into the `JpegStream` class, such that we can copy the data before doing the replacement.
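A sketch of the idea, with illustrative names (only `JpegStream` comes from the actual code):

```js
// Copy the bytes before patching the EXIF-block, so the original PDF
// data stays intact if the document is saved again from the viewer.
function getReplacedExifBytes(originalBytes, exifStart, exifLength) {
  const bytes = originalBytes.slice(); // work on a copy, not in place
  // Overwrite the EXIF-block with dummy data of the same length.
  bytes.fill(0x00, exifStart, exifStart + exifLength);
  return bytes;
}
```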
During the XRef stream parsing we may attempt to look up an entry that hasn't yet been found, since parsing is still running. Given that we'd also cache free/missing XRef entries, we'd then return an incorrect value during normal PDF parsing.
The simplest solution here is to just not cache free/missing XRef entries, since a properly generated PDF document shouldn't be trying to access objects it doesn't contain.
Furthermore, the amount of "extra" parsing now needed for such XRef entries shouldn't be significant enough to be an issue.
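A simplified sketch of the cache guard, not the actual XRef code:

```js
class XRefSketch {
  #cache = new Map();

  getEntry(i) {
    if (this.#cache.has(i)) {
      return this.#cache.get(i);
    }
    const entry = this.#readEntry(i); // `null` for free/missing entries
    if (entry === null) {
      // Don't cache: while the XRef stream is still being parsed,
      // "missing" may simply mean "not parsed yet", and a cached null
      // would later be returned incorrectly during normal parsing.
      return null;
    }
    this.#cache.set(i, entry);
    return entry;
  }

  #readEntry(i) {
    // Parse the table/stream entry for object `i` here.
    return null;
  }
}
```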
It fixes #19505.
We were invalidating throwing actions (by setting event.rc to false), and the whole event process was stopped.
Now we just dump the exception in the console: the action is skipped and event.rc is not set, since otherwise the input fields wouldn't be updated with Keystroke actions.
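A sketch of the change, with illustrative names:

```js
function runAction(action, event) {
  try {
    action(event);
  } catch (ex) {
    // Just dump the exception and skip this action. Crucially, leave
    // `event.rc` untouched: setting it to false here would stop the
    // whole event process, e.g. preventing Keystroke actions from
    // updating the input fields.
    console.error(ex);
  }
}
```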
Currently this rule is disabled in a number of spots across the code-base, and unless absolutely necessary we probably shouldn't disable linting, so let's just update the code to fix all the outstanding cases.
If either of the factory-urls is missing or invalid, the fallback value would currently become `useWorkerFetch === null`.
While that is obviously a falsy value, which means that the code still works as intended, we should ensure that the fallback is consistently a boolean.
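For example, the fallback could be normalized as follows (names are illustrative; `isValidFetchUrl` stands in for whatever validation is actually performed on the factory-urls):

```js
function resolveUseWorkerFetch(params) {
  const isValidFetchUrl = url =>
    typeof url === "string" && url.startsWith("http");

  // `??` plus `!!` guarantees a boolean, never `null`.
  return (
    params.useWorkerFetch ??
    !!(
      isValidFetchUrl(params.cMapUrl) &&
      isValidFetchUrl(params.standardFontDataUrl)
    )
  );
}
```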
- Use `TypedArray.prototype.set()` rather than a manual loop when building the `key`.
- Use an existing local variable to avoid re-computing the length of the `encryptionKey`.
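A sketch of the two micro-optimizations, with illustrative surrounding code:

```js
const encryptionKey = new Uint8Array([0x01, 0x02, 0x03, 0x04, 0x05]);
const keyLength = encryptionKey.length; // re-use, don't re-compute

const key = new Uint8Array(keyLength + 5);
// Before: a manual loop copied the bytes one at a time:
//   for (let i = 0; i < keyLength; i++) { key[i] = encryptionKey[i]; }
// After: a single TypedArray.prototype.set() call.
key.set(encryptionKey);
```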