If either of the factory-urls is missing or invalid, the fallback value would currently become `useWorkerFetch === null`.
While that is obviously a falsy value, which means that the code still works as intended, we should ensure that the fallback is consistently a boolean.
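A minimal sketch of the idea, where the parameter names (`useWorkerFetch`, `cMapUrl`, `standardFontDataUrl`) are assumptions for illustration only:

```js
// Hypothetical input; `useWorkerFetch` was left unset by the caller.
const params = { cMapUrl: "../web/cmaps/", useWorkerFetch: null };

// Coerce the fallback to a real boolean, rather than letting `null` leak
// through when either factory-url is missing or invalid.
const useWorkerFetch =
  typeof params.useWorkerFetch === "boolean"
    ? params.useWorkerFetch
    : !!(params.cMapUrl && params.standardFontDataUrl);
```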
- Use `TypedArray.prototype.set()` rather than a manual loop when building the `key` (see the sketch below).
- Use an existing local variable to avoid re-computing the length of the `encryptionKey`.
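A hedged before/after sketch of both changes; the variable names and sizes are illustrative, not the actual pdf.js source:

```js
const encryptionKey = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
const keyLength = encryptionKey.length; // Existing local variable, reused.
const key = new Uint8Array(keyLength + 9);

// Before: copying one byte at a time.
for (let i = 0; i < encryptionKey.length; i++) {
  key[i] = encryptionKey[i];
}
// After: a single typed-array copy.
key.set(encryptionKey);
```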
Rather than modifying the "raw" dimensions of the page, we'll instead apply the `userUnit` as an *additional* scale-factor via CSS.
*Please note:* It's not clear to me if this solution is fully correct either, or if there are other problems with it, but it at least *appears* to work.
---
With these changes, the following CSS variables are now assumed to be available/set as necessary: `--total-scale-factor`, `--scale-factor`, `--user-unit`, `--scale-round-x`, and `--scale-round-y`.
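For context, a minimal sketch of how such variables can be set from JavaScript; the selector and values are assumptions, not the actual viewer code:

```js
const pageDiv = document.querySelector(".page");
pageDiv.style.setProperty("--scale-factor", 1.5);
pageDiv.style.setProperty("--user-unit", 2);
// `--total-scale-factor` could then be derived in the stylesheet, e.g. as
// calc(var(--scale-factor) * var(--user-unit)).
```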
Given that most inferred links will overlap existing LinkAnnotations, creating a lot of unused `borderStyle` objects seems unnecessary.
Hence we can move that into the `AnnotationLayer.prototype.addLinkAnnotations` method instead, which also allows us to slightly reduce the API-surface.
Currently we have three separate and virtually identical message handlers for this data, which can easily be combined into a single message handler instead.
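A hypothetical shape of the combined handler; the message name, payload, factory objects, and the `messageHandler` with an `on` method are all assumptions for illustration only:

```js
const factories = {
  cMap: { async fetch({ name }) { /* load a cMap */ } },
  standardFontData: { async fetch({ name }) { /* load font data */ } },
  wasm: { async fetch({ name }) { /* load a wasm module */ } },
};

// One handler, dispatching on a `type` field, replaces three copies.
messageHandler.on("FetchBinaryData", ({ type, name }) => {
  const factory = factories[type];
  if (!factory) {
    throw new Error(`Unsupported binary-data type: ${type}`);
  }
  return factory.fetch({ name });
});
```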
Automatically detect links in the text content of a file, and generate link annotations at the appropriate locations.
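A hedged sketch of the detection step; the regex is a deliberate simplification, not the actual pattern used:

```js
const URL_REGEX = /\bhttps?:\/\/[^\s<>"')\]]+/g;

function findUrls(text) {
  return Array.from(text.matchAll(URL_REGEX), match => ({
    url: match[0],
    index: match.index,
    length: match[0].length,
  }));
}

// Each match can then be mapped back to page coordinates in order to
// create a link annotation covering that span of text.
console.log(findUrls("See https://mozilla.github.io/pdf.js/ for details."));
```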
Given that we now have a few different factory-url parameters, we introduce a helper function for parsing them.
*Please note:* These parameters have always been documented as requiring a trailing slash[1], which we can now easily enforce during the `getDocument`-call.
---
[1] I recall that we've occasionally seen issues because users miss that detail, and the new Error should hopefully be more easily actionable than one thrown during rendering/parsing.
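A minimal sketch of such a helper; the function name and error message are assumptions, not the actual implementation:

```js
function getFactoryUrlProp(val) {
  if (typeof val !== "string") {
    return null; // A missing/invalid url simply disables the factory.
  }
  if (!val.endsWith("/")) {
    throw new Error(`The url "${val}" must include a trailing slash.`);
  }
  return val;
}

getFactoryUrlProp("../web/cmaps/"); // OK.
// getFactoryUrlProp("../web/cmaps") would throw immediately, which is
// easier to act on than a failure during rendering/parsing.
```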
This CSS feature is now available in *most* browsers that we support, with old Chromium-based browsers being the only exception; please see https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color-mix#browser_compatibility
From this data we see that the feature in question has been supported since Chrome 111, which was released on 2023-03-01 (i.e. almost two years ago).
Please note that we've never guaranteed that all features and functionality will be available in the oldest supported browsers.
Furthermore, even with the `color-mix` fallback removed, PopupAnnotations will still function just as before, but may render with the default color (defined in the CSS-file) rather than the one specified in the PDF document.
At the time of PR 12896 the `fontBoundingBox{Ascent, Descent}` properties were not yet available by default in Firefox; however, that's no longer the case since Firefox 116, please see https://bugzilla.mozilla.org/show_bug.cgi?id=1801198.
Hence this patch, which replaces the "full" fallback with a warning and uses the `ascent`/`descent` values from the fonts in the PDF document (as we did previously). Obviously the TextLayer won't look as good in that case, but it's a simpler and shorter solution.
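A sketch of reading these metrics; the font and the normalization are illustrative, not the actual TextLayer code:

```js
const ctx = document.createElement("canvas").getContext("2d");
ctx.font = "100px serif";

const { fontBoundingBoxAscent, fontBoundingBoxDescent } = ctx.measureText("");
// Normalized ascent, i.e. the fraction of the line height above the baseline.
const ascent =
  fontBoundingBoxAscent / (fontBoundingBoxAscent + fontBoundingBoxDescent);
```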
Currently we're manually computing the width/height of the /Rect-entry in a number of spots throughout the worker-thread Annotation code, which these new getters help avoid.
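Hypothetical getters matching the description, assuming an already-normalized `[x1, y1, x2, y2]` rectangle:

```js
class AnnotationRect {
  constructor(rect) {
    this.rectangle = rect; // Normalized /Rect-entry: [x1, y1, x2, y2].
  }
  get width() {
    return this.rectangle[2] - this.rectangle[0];
  }
  get height() {
    return this.rectangle[3] - this.rectangle[1];
  }
}

const rect = new AnnotationRect([100, 100, 300, 150]);
console.log(rect.width, rect.height); // 200 50
```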
Currently we're initializing the image-options for every page, which seems unnecessary since it should suffice to do that once per document.
This also changes the `BasePdfManager` constructor to improve readability/documentation a little bit.
This patch adds code to extract a drawing, as a set of curves, from an image.
The algorithm is basically the following:
- reduce the dimensions
- convert the image to grayscale
- apply a bilateral filter to add some blurriness while preserving the edges
- compute the histogram
- guess the background color, which should account for a large majority of the pixels
- binarize the image
- extract the contours using the Suzuki algorithm
- apply the Douglas-Peucker algorithm to reduce the number of points (see the sketch after this list)
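A self-contained sketch of the Douglas-Peucker simplification step (not the actual implementation); points are `[x, y]` pairs:

```js
// Perpendicular distance from point `p` to the line through `a` and `b`.
function perpendicularDistance([px, py], [ax, ay], [bx, by]) {
  const dx = bx - ax,
    dy = by - ay;
  const len = Math.hypot(dx, dy);
  if (len === 0) {
    return Math.hypot(px - ax, py - ay);
  }
  return Math.abs(dy * px - dx * py + bx * ay - by * ax) / len;
}

// Recursively drop points that lie within `epsilon` of the simplified path.
function douglasPeucker(points, epsilon) {
  if (points.length <= 2) {
    return points;
  }
  const first = points[0],
    last = points[points.length - 1];
  let maxDist = 0,
    index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const dist = perpendicularDistance(points[i], first, last);
    if (dist > maxDist) {
      maxDist = dist;
      index = i;
    }
  }
  if (maxDist <= epsilon) {
    return [first, last]; // Everything in between is close enough.
  }
  const left = douglasPeucker(points.slice(0, index + 1), epsilon);
  const right = douglasPeucker(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right); // Drop the duplicated split point.
}

// Usage: simplify a noisy polyline with a 1px tolerance; the result keeps
// far fewer points, each within ~1px of the original path.
console.log(
  douglasPeucker(
    [[0, 0], [1, 0.1], [2, -0.1], [3, 5], [4, 6], [5, 7], [6, 8.1], [7, 9]],
    1
  )
);
```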
The algorithm leaves room for improvement, but it should work pretty well when there's a clear contrast between the background and the drawing.
In a v2 we could use an ML model to improve the extraction.
There are a few UI changes to make the tool usable, but they're very basic for the moment.