It's already enabled by default in Firefox, and since there are no open issues regarding auto-linking, I suppose we can attempt to enable it unconditionally.
Automatically detect links in the text content of a file and generate
link annotations at the appropriate locations to enable automatic
hyperlinking.
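As a rough sketch of the idea (the regular expression and names here are illustrative, not necessarily the actual implementation):

```js
// Illustrative sketch only: find URL-like substrings in a page's text
// content; the matches would then be mapped to page coordinates and
// turned into link annotations.
const URL_REGEX = /\bhttps?:\/\/[^\s<>"')\]]+/g;

function detectLinks(pageText) {
  const links = [];
  for (const match of pageText.matchAll(URL_REGEX)) {
    links.push({ url: match[0], start: match.index, length: match[0].length });
  }
  return links;
}
```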
This patch adds code to extract a drawing as curves from an image.
The algorithm is basically the following:
- reduce the dimensions
- convert it to grayscale
- apply a bilateral filter to add some blurriness while keeping the edges
- compute the histogram
- guess the background color, which should account for a large majority of the pixels
- make a binary image
- extract the contours using the Suzuki algorithm
- apply the Douglas-Peucker algorithm to reduce the number of points
The algorithm leaves room for improvement, but it should work pretty well when
there's a clear difference between the background and the drawing.
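To illustrate the last step, here's a minimal generic sketch of the Douglas-Peucker simplification (not the actual code from this patch):

```js
// Minimal Douglas-Peucker sketch: a point is kept only if it deviates
// from the line between the segment's endpoints by more than `epsilon`.
function douglasPeucker(points, epsilon) {
  if (points.length <= 2) {
    return points;
  }
  const [x1, y1] = points[0];
  const [x2, y2] = points[points.length - 1];
  const norm = Math.hypot(x2 - x1, y2 - y1) || 1;
  let maxDist = 0,
    index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const [x, y] = points[i];
    // Perpendicular distance from (x, y) to the line (x1,y1)-(x2,y2).
    const dist = Math.abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / norm;
    if (dist > maxDist) {
      maxDist = dist;
      index = i;
    }
  }
  if (maxDist <= epsilon) {
    return [points[0], points[points.length - 1]];
  }
  // Recurse on both halves and merge, dropping the duplicated split point.
  const left = douglasPeucker(points.slice(0, index + 1), epsilon);
  const right = douglasPeucker(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);
}
```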
In a v2 we could use an ML model to improve the extraction.
There are a few UI changes to make the tool usable, but they're very basic
for the moment.
After the removal of the 'unsafe-eval' CSP in #18651, WebAssembly fails to
load, resulting in issues such as the one seen in #18457.
Manifest Version 3 does not allow 'unsafe-eval', but it does accept the more
specific 'wasm-unsafe-eval' as of Chrome 103. Note that manifest.json
already sets minimum_chrome_version to 103.
This patch also adds `object-src 'self'` because it was required until
Chrome 110. As of Chrome 111, the default is `object-src 'self'` and
`object-src` is no longer required. We could drop `object-src` in the
future, but for now we need to include it to support Chrome 103 - 110.
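The resulting manifest fragment looks roughly like this:

```json
{
  "content_security_policy": {
    "extension_pages": "script-src 'self' 'wasm-unsafe-eval'; object-src 'self'"
  }
}
```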
Fix regression from #18711. `urlFilter` is a key of `condition`, not of
the `rule`. Consequently, the feature detection method failed to detect
the availability of the feature in Chrome 128+.
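For reference, a declarativeNetRequest rule has this shape (values illustrative):

```js
const rule = {
  id: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "|https://example.com/*", // `urlFilter` belongs here,
    resourceTypes: ["main_frame"],       // inside `condition`...
  },
  // ...not at the top level of the rule object.
};
```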
The minimum required version is Chrome 103, because wildcard support for
web_accessible_resources[].extension_id was introduced in Chrome 103, in
commit c9caeb1a08.
A way to broaden compatibility would be to drop that key. By doing so, the
minimum required Chrome version would be 96, because of the
chrome.storage.session API (and declarativeNetRequestWithHostAccess).
Since gulpfile.js already defines "Chrome >= 98" and there is no real
point in expanding support to older versions, I'm just setting the
minimum version to 103.
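For context, the manifest entry in question looks roughly like this (resource path illustrative; Chrome's MV3 schema spells the key extension_ids):

```json
{
  "web_accessible_resources": [{
    "resources": ["content/web/viewer.html"],
    "extension_ids": ["*"]
  }]
}
```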
- Replace DOM-based pdfHandler.html (background page) with background.js
(extension service worker).
- Adjust logic of background scripts to account for the fact that the
scripts can execute repeatedly during a browser session. Primarily,
register relevant extension event handlers at the top level and use the
in-memory storage.session API to keep track of initialization state.
- Extension URL router: replace blocking webRequest with the service
worker-specific "fetch" event.
- PDF detection: replace blocking webRequest with declarativeNetRequest.
This requires Chrome 128+. The next commit will add a fallback for
earlier Chrome versions.
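A rough sketch of the service-worker pattern described above (names, paths, and routing logic here are illustrative, not the actual extension code):

```js
// background.js — extension service worker (illustrative sketch).
// Listeners must be registered at the top level, because the worker can
// be stopped and restarted many times during a browser session.
self.addEventListener("fetch", event => {
  const url = new URL(event.request.url);
  // Hypothetical router: treat chrome-extension://<id>/https://... URLs
  // as requests to open the viewer for that PDF.
  if (url.pathname.startsWith("/http")) {
    const viewerUrl =
      chrome.runtime.getURL("content/web/viewer.html") +
      "?file=" + encodeURIComponent(url.pathname.slice(1) + url.search);
    event.respondWith(Response.redirect(viewerUrl));
  }
});

// Initialization state cannot live in a global variable (the worker may
// restart); storage.session keeps it in memory for the browser session.
async function ensureInitialized() {
  const { initialized } = await chrome.storage.session.get("initialized");
  if (!initialized) {
    // ...one-time setup, e.g. registering declarativeNetRequest rules...
    await chrome.storage.session.set({ initialized: true });
  }
}
```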
In MV3, the background script is a service worker. localStorage is not
supported in service workers. Switch to storage.local instead.
Data migration is not implemented because it is not needed due to the
privacy-friendly design of the telemetry: In practice the data is reset
about every 4 weeks, when the major version of Chrome is updated.
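The change boils down to swapping the synchronous localStorage calls for the asynchronous storage.local API (key name hypothetical; the awaits live inside some async initialization function):

```js
// Before (MV2 background page, synchronous):
//   const last = localStorage.getItem("telemetryLastTime");
//   localStorage.setItem("telemetryLastTime", String(Date.now()));
// After (MV3 service worker, asynchronous):
const { telemetryLastTime } = await chrome.storage.local.get("telemetryLastTime");
await chrome.storage.local.set({ telemetryLastTime: Date.now() });
```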
restoretab.js was added in https://github.com/mozilla/pdf.js/pull/6233
with the purpose of restoring tabs that are lost when Chrome closes all
extension tabs upon reloading the extension. This forced reload can happen when
the user toggles the "Allow access to file URLs" option.
This logic no longer works, and since the use of localStorage is a blocker
in migrating to MV3, this patch simply drops it.
MV3 does not support chrome_style in options_ui. Remove it and replace it
with a minimal amount of styling that keeps some spacing around the
individual settings for readability.
webRequestBlocking is unavailable in MV3. Non-blocking webRequest can
still be used to detect the Referer, but we have to use
declarativeNetRequest to change the Referer header as needed.
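A sketch of what the declarativeNetRequest side can look like (URL, rule id, and variable names illustrative):

```js
// Rewrite the Referer header on matching requests via a session rule;
// non-blocking webRequest can still observe the header, but only
// declarativeNetRequest can modify it in MV3.
const referrerUrl = "https://example.com/"; // illustrative value
await chrome.declarativeNetRequest.updateSessionRules({
  removeRuleIds: [1], // replace any previous version of this rule
  addRules: [{
    id: 1,
    condition: {
      urlFilter: "|https://example.com/doc.pdf",
      resourceTypes: ["xmlhttprequest"],
    },
    action: {
      type: "modifyHeaders",
      requestHeaders: [
        { header: "referer", operation: "set", value: referrerUrl },
      ],
    },
  }],
});
```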
In preparation for migrating the Chrome extension to Manifest Version 3,
this patch removes parts of the manifest that are obsolete and/or
unsupported in MV3.
Remove ftp mentions: ftp support was dropped 6 years ago, in Chrome 59.
Remove file_browser_handlers: this does not work on Chrome OS, according to
https://github.com/mozilla/pdf.js/issues/14161. The MV3 replacement needs
a different manifest key and logic, which will be added later.
Remove content_security_policy: MV3 does not support unsafe-eval CSP,
and PDF.js does not require it. This may result in a mild performance
degradation in PDFs that contain PostScript.
Remove page_action logic: When this logic was originally introduced,
Chrome showed page action buttons in the address bar, which enabled
the extension to display the button on specific PDF viewer tabs only.
In Chrome 49 (2016), pageActions were dropped from the address bar and
put in a UI area that is hidden by default. The user can pin the button to
make it unconditionally visible on the toolbar, but that is distracting.
Because the UX no longer serves the original needs, this patch
removes page_action from the manifest.
This patch adds a new entry in the secondary menu that opens a dialog to let the user:
- disable the alt-text generation based on an ML model;
- delete the alt-text model downloaded in Firefox;
- disable the new alt-text flow.
The HTML button elements we use are all regular buttons that don't
submit form data to a server. According to
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/button#notes
those buttons should all have their type set to `button` explicitly, but
we only do that for a handful of them. This commit fixes the issue by
consistently giving all our buttons the `button` type.
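For example (the element shown is illustrative): without an explicit type, a button placed inside a form defaults to `submit`:

```html
<!-- Before: implicitly type="submit" when inside a form. -->
<button id="print">Print</button>
<!-- After: -->
<button type="button" id="print">Print</button>
```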
Co-authored-by: Calixte Denizet <calixte.denizet@gmail.com>
This should be a *tiny* bit more efficient, since it avoids parsing substrings that we don't care about.
*Please note:* I cannot find an ESLint rule to enforce this automatically.
The doorhanger for highlighting has a basic color picker composed of 5 predefined colors
to set the default color to use.
These colors can be changed via a preference for now, but this is something that
could move to the Firefox settings in the future.
Each highlight has in its own toolbar a color picker to just change its color.
The different color pickers are so similar (modulo a few differences in their styles) that
this patch introduces a new class, ColorPicker, which provides a color picker component
that could be reused in future editors.
All in all, a large part of this patch is dedicated to the color picker itself and its style;
the rest is mostly a matter of wiring up the component.
The `viewerCssTheme`-implementation has always been somewhat hacky, and now it's also *partially* broken since we started using CSS nesting.
Trying to support nested media queries would require a lot more parsing of the CSS rules, which seems inefficient and generally undesirable.[1]
As discussed on Matrix, let's try to remove the `viewerCssTheme`-option and see if there's any (significant) fallout from this.
---
[1] If this option is brought back, it seems to me that it (in Firefox) should probably be set through the platform-code that handles theming.
*Please note:* This only removes the preference itself; both the viewer-option and the actual implementation are still available.
The `useOnlyCssZoom` functionality was only ever used, by default, in the PDF Viewer for the B2G/FirefoxOS project (which was abandoned years ago). Given that CSS-only zooming can easily make the document look blurry even at low zoom levels, this functionality was only intended for low-powered mobile devices.
Hence it seems reasonable to remove the `useOnlyCssZoom` preference now, since neither the default viewer nor the GeckoView-specific viewer uses this functionality.
Semantically, it is more correct to encode the fragment in the URL
instead of the URL-encoded `file` query parameter. This shouldn't matter
in practice, because `rewriteUrlClosure` in `chromecom.js` decodes the
`file` parameter and restores the fragment. However, as #16625 shows,
there was a case where this did not work as expected.
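Concretely (URLs illustrative):

```
Before: fragment encoded inside the `file` parameter
  chrome-extension://<id>/content/web/viewer.html?file=https%3A%2F%2Fexample.com%2Fdoc.pdf%23page%3D2
After: fragment kept on the viewer URL itself
  chrome-extension://<id>/content/web/viewer.html?file=https%3A%2F%2Fexample.com%2Fdoc.pdf#page=2
```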
Right now, the visible pages are redrawn for each scale change.
Consequently, zooming with the mouse wheel or by pinching can be pretty
janky (even on a desktop machine, if it has a HiDPI screen).
So the main idea in this patch is to draw the visible pages only once zooming
is finished.
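A minimal sketch of the idea (the helper names are hypothetical): scale the existing canvases with a cheap CSS transform while the gesture is in progress, and only rasterize once it settles:

```js
let zoomTimeout = null;

function onScaleChanging(newScale) {
  // Cheap path while zooming: reuse the current canvases via a CSS
  // transform (hypothetical helper).
  cssScaleVisiblePages(newScale);

  // Expensive path, postponed until no scale change has occurred for
  // 200ms: redraw the visible pages at the final scale (hypothetical
  // helper).
  clearTimeout(zoomTimeout);
  zoomTimeout = setTimeout(() => redrawVisiblePages(newScale), 200);
}
```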