@ryanking13 added these very nice error messages to the `ModuleNotFoundError`
errors. However, they introduce a few problems:
1. `find_spec` is supposed to return `None` or a spec, not to raise an error.
If it raises errors, it can cause trouble in code that checks whether a module is
installed or not.
2. Other code that tries to add new import hooks has to know to insert them
before these error-raising import hooks.
See the discussion in #3262.
This instead patches `importlib._bootstrap` to create a function called
`_get_module_not_found_error`. We then can monkey patch this to modify
the error messages that `importlib` raises.
See Python issue: https://github.com/python/cpython/issues/100208
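As a rough sketch of how this hook can be monkey-patched downstream (the one-argument signature here is an assumption):
```py
import importlib._bootstrap as _bootstrap

def _get_module_not_found_error(name):
    # Return the error importlib will raise for a missing module,
    # with a Pyodide-specific hint appended to the standard message.
    msg = f"No module named {name!r}. You may need to load it with pyodide.loadPackage or micropip."
    return ModuleNotFoundError(msg, name=name)

_bootstrap._get_module_not_found_error = _get_module_not_found_error
```
Because `importlib` calls this function at raise time, the patch changes the message without any error-raising import hooks on `sys.meta_path`.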
If a JavaScript object has a `next` method and exactly one of `Symbol.iterator` or `Symbol.asyncIterator`,
we use that to tell us whether `next` is sync or async. If both or neither of these
Symbols are present then we define both `__next__` and `__anext__`.
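To make the rule concrete, here is an illustrative Python restatement of the decision table; the real logic lives in the proxy layer, and these names are made up:
```py
def methods_to_define(has_next, has_sym_iterator, has_sym_async_iterator):
    # Hypothetical restatement of the rule above, not the actual implementation.
    if not has_next:
        return set()
    if has_sym_iterator != has_sym_async_iterator:
        # Exactly one Symbol present: it tells us whether `next` is sync or async.
        return {"__next__"} if has_sym_iterator else {"__anext__"}
    # Both or neither present: define both.
    return {"__next__", "__anext__"}
```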
Continuing with my work on support for generators, this adds `athrow` and
`aclose` for async generators.
Currently, if you use `athrow` and the generator doesn't catch the error, it
throws a double-wrapped error. This is probably also a problem with async calls
Python --> JS --> Python, because the way that we control the Python exception
lifetime only works correctly in synchronous contexts. Fixing this is a TODO.
Also, there is still a question of what to do with the return values of async JavaScript
generators. Currently I think they are available from Python as
`async_stop_iteration_exception.args[0]`.
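For illustration, here is roughly how this looks from Python, assuming `run_js` from `pyodide.code` and top-level await as in the Pyodide console:
```py
from pyodide.code import run_js

agen = run_js("""
(async function* () {
  try {
    yield 1;
    yield 2;
  } finally {
    console.log("cleanup ran");
  }
})()
""")

print(await agen.asend(None))  # 1
await agen.aclose()            # runs the generator's finally block
```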
This adds tests of all the `MutableMapping` methods for objects returned by
`as_object_map`. They all worked, but I was accidentally adding only the
non-mutating methods. I also added `JsMap` and `JsMutableMap` to `pyodide.ffi`
and fixed it so that `isinstance` works correctly and mypy agrees with the usage.
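A sketch of the intended usage, assuming `run_js` and the new `pyodide.ffi` names:
```py
from pyodide.code import run_js
from pyodide.ffi import JsMap, JsMutableMap

m = run_js("({ a: 1, b: 2 })").as_object_map()
assert isinstance(m, JsMap)
assert isinstance(m, JsMutableMap)
m["c"] = 3             # MutableMapping-style mutation
assert m.pop("c") == 3
```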
Adds a new key to the meta.yaml spec, `requirements/executable`, which specifies the list of executables required to build a package. Unlike conda, we don't build or install these executables; this key exists just to halt the build earlier.
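For illustration, a recipe using this key might look like the following (the executable named here is hypothetical):
```yaml
requirements:
  executable:
    - cmake  # the build halts early if cmake is not found on PATH
```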
Co-authored-by: Hood Chatham <roberthoodchatham@gmail.com>
This defines `__aiter__` for JavaScript objects with a `Symbol.asyncIterator`.
It also defines `anext` and `asend` for JavaScript objects with a `next` function.
These will fail with a suggestion to use `next`/`send` instead if the `next` function
does not return a promise.
This adds methods `keys`, `items`, `values`, `get`, `pop`, `setdefault`, `popitem`,
`update`, and `clear` to a `JsProxy` of a map-like object. Since both Sequence-like
objects and Map-like objects have a `pop` method, but with different
signatures, I had to split the `JsProxy` documentation class into several different
classes. This is probably a good idea anyway. As a follow-up, I could set it up so
that `isinstance` and `issubclass` work correctly on these documentation classes.
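A quick sketch of the new surface, assuming `run_js` from `pyodide.code`:
```py
from pyodide.code import run_js

m = run_js("new Map([['a', 1], ['b', 2]])")
assert m.get("a") == 1
m.setdefault("c", 3)
assert m.pop("b") == 2
assert sorted(m.keys()) == ["a", "c"]
m.clear()
```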
Before, we had a single `JsProxy` documentation class. Apparently mypy treated
it as `Any`. It had a bunch of methods on it that may or may not appear on any
specific `JsProxy`.
This does two things:
1. Splits up the `JsProxy` class into several synthetic subclasses like `JsArray`,
`JsBuffer`, etc. These work much better with mypy, should also improve the
documentation layout, and help when different subclasses have different
methods with the same name (e.g., `JsArray` and `JsMap` will both have `pop`
methods).
2. Makes `isinstance` and `issubclass` work correctly both with synthetic `JsProxy`
classes like `JsArray` and `JsBuffer`, and with `type(an_actual_jsproxy)`.
This cleans up the bootstrapping mess in `_importhook` because the `JsProxy`
from `_core_docs` works fine for instance checks now.
I had to make changes to various other Python files
because mypy now understands the types better and noticed that there were type errors.
For instance, this fixed several minor mistakes in the types in `http.py`.
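For instance, something like this should now both type-check and pass at runtime (a sketch, assuming the class names above are exported from `pyodide.ffi`):
```py
from pyodide.code import run_js
from pyodide.ffi import JsArray, JsProxy

arr = run_js("[1, 2, 3]")
assert isinstance(arr, JsProxy)
assert isinstance(arr, JsArray)   # synthetic subclass matches a real proxy
assert issubclass(JsArray, JsProxy)
```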
pycryptodomex was added in #2966, but it had an invalid recipe file name, meta.yml (not meta.yaml), so our build system never built that package.
I found that in #3006, and I also found that it did not build properly, so I disabled it then. In other words, pycryptodomex never worked in Pyodide.
I would like to remove it from the changelog and the repository for now, so that we don't accidentally include it in our next stable release. Perhaps someone interested can re-add this package.
This is needed to finish getting the numpy tests working.
It won't work for packages that need shared libraries without some extra intervention;
@ryanking13's work in #3234 would help to fix this.
`pip install scipy` makes the command line runner extremely slow. Without scipy installed,
`python -c 'print(1)'` runs in about 1 second, but with it installed it takes more like 10 seconds
(the time to load `clapack_all.so` and 111 different `.so` files in scipy, totaling 20 megabytes). We
have to load all of this even though we won't use any of it.
This fixes two minor bugs when building packages:
- Shared libraries are not copied into the dist directory when the package is already built.
- When a build fails, the shared library zip file remains in the package directory.
Co-authored-by: Hood Chatham <roberthoodchatham@gmail.com>
This adds libheif and Python packages that use libheif, to support the HEIC image format.
Note that this is "decoder" only. libheif uses libde265 for decoding and libx265 for encoding, and I only added libde265 in this PR. I think decoding is more important for our use case, hence this PR.
This PR adds an API `pyodide.mountNativeFS` which mounts a FileSystemDirectoryHandle into the Pyodide file system.
Note that there are some limitations:
- The File System Access API is only supported by Chromium-based browsers (Safari also implements a portion of it, but only the Origin Private File System, so it is not very useful for common users).
- The file system is asynchronous, so one needs to call `syncfs` to persist changes.
Since it is asynchronous, it does not require any WebWorker, SharedArrayBuffer, or Atomics, but I think we can extend this to a synchronous version using those features.
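A hypothetical usage sketch from the Python side (Chromium only; the directory picker needs a user gesture, and the `pyodide_js` bridge is assumed):
```py
import pyodide_js
from js import showDirectoryPicker

handle = await showDirectoryPicker()
nativefs = await pyodide_js.mountNativeFS("/mnt/native", handle)

with open("/mnt/native/hello.txt", "w") as f:
    f.write("hello from Pyodide")

await nativefs.syncfs()  # persist the write back to the real directory
```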
Some people have run into the difficulty that certain JavaScript frameworks
use `this` in critical ways, so a `PyProxy` callback cannot be handed to them
because there is no way for the `PyProxy` to get access to the `this` argument.
See #2901 for example. The goal of this PR is to address this. We cannot always
pass `this` through because usually it isn't useful. So the idea is to add an internal
setting on the `PyProxy` indicating whether it should be passed and some way to
set this setting. Currently I've added a method called `captureThis` which indicates
that `this` should be passed as an argument to Python.
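Here is a sketch of the intended pattern; the wiring below is illustrative, not the exact API surface:
```py
from pyodide.code import run_js

def py_handler(this, event):
    # With captureThis set, `this` arrives as the first positional argument.
    return this.id

wire_up = run_js("""
(handler) => {
  const widget = { id: "widget-1", onclick: handler.captureThis() };
  return widget.onclick("fake-event");  // Python sees this === widget
}
""")
assert wire_up(py_handler) == "widget-1"
```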
This uses sed to insert `/* webpackIgnore: true */` comments into `pyodide.js`.
This resolves #3087. I also enabled a check that a simple webpack config
builds without warnings. In the future it would be good to add a test that it
also runs without error.
This adds the distributed binary files to the "exports" section of the package.json file.
This allows the files to be referenced, e.g. by `require.resolve('pyodide/distutils.tar')`,
which is useful for tools like webpack.
This updates scipy to v1.9.1. This was mercifully easy:
* for now we disable meson
* we dropped `patches/0014-BUG-Fix-signature-of-D_IIR_forback-1-2.patch` since it was upstreamed
* we had to add a patch to put fitpack back into a shape that makes f2c happy
* we need one more `-Wincompatible-function-pointer-types` fix; upstream PR: https://github.com/scipy/scipy/pull/16934
This commit changes how we load packages.
Before, we loaded all shared libraries first, then all Python packages.
Within each group the load order was arbitrary, which could cause a problem
if there was a load-time dependency between libraries or between packages.
Now we load them in topologically sorted order with respect to their dependencies.
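The loader itself is JavaScript, but the ordering rule is an ordinary topological sort. In Python terms, with illustrative dependency data:
```py
from graphlib import TopologicalSorter

# Map each package to the set of packages it depends on.
deps = {"numpy": set(), "scipy": {"numpy"}, "pandas": {"numpy"}}
order = list(TopologicalSorter(deps).static_order())
assert order.index("numpy") < order.index("scipy")  # dependencies load first
```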
The command line runner in #2976 finally works, but it is a large change set, so
I am planning to split it up. This is the first PR split off from there.
This PR adds a patch to Python to expose `pymain_run_python`. We cannot use the
public API `Py_RunMain` for this purpose because it finalizes the Python
interpreter with `Py_FinalizeEx` when it is done. If we start an async task with
`Py_RunMain`, it will segfault. `pymain_run_python` does a large amount of work,
and reproducing it is undesirable.
I added an `args` parameter which accepts command line arguments. The private
entry point `pyodide._main._run_main()` executes the main Python entry point
without shutting down the interpreter and returns the return code.
This fixes #3011.
We attempted to make a test for this, but in all of our tests
`document.location === indexURL`. Our logic incorrectly used indexURL
instead of `document.location`.
As discussed in #2940, this PR changes the default value of the `fullStdLib` flag in `loadPyodide` to `false`.
This is a breaking change because, for now, `distutils` is not loaded by default.
Also, when `fullStdLib` is set to `true`, it loads all unvendored stdlibs except `test`.
Co-authored-by: Hood Chatham <roberthoodchatham@gmail.com>
Unvendor the standard library `sqlite3` module to reduce the size of the main module. This reduces the size of `pyodide.asm.wasm` by around 1.4 MB.
Co-authored-by: Hood Chatham <roberthoodchatham@gmail.com>
This PR reverts the indentation change introduced in #2909 to make sure badges
are displayed correctly. I've checked pyodide.org/en/latest/project/changelog.html,
and the badges are indeed currently broken.
This script will run with the target environment variables and
sysconfigdata and with the pywasmcross compiler symlinks.
Any changes to the environment will persist to the main build
step but will not be seen in the post step (or anything else
done outside of the cross build environment). The working
directory for this script is the source directory.
This option is intended to be used with `micropip.freeze`. A user can
save the lockfile generated by `micropip.freeze` and load that lockfile
while using the rest of the files from the CDN.
Closes #2747
* renames `packages.json` to `repodata.json`
* renames the corresponding JS and Python variables to be a bit more explicit.
Tangentially related to #795
This enables WASM_BIGINT while maintaining (hypothetical) Safari 14 support
by shimming BigInt64Array and BigUint64Array if they are missing. I think the
last time we tried to enable WASM_BIGINT was before #2019 so our chances
are significantly better this time.
This fixes dynamic linking bugs and yields a minor reduction in code size.
We use `packaging.tags.sys_tags` to get the list of supported tags,
then use `packaging.utils.parse_wheel_filename` to get the set of
tags the wheel implements, then check whether one of the wheel's
tags is a supported tag. This check is fully accurate and can
also catch things like abi3 wheels that are compatible with
multiple Python versions.
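A sketch of the check, using only the `packaging` APIs named above:
```py
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

def is_wheel_compatible(filename: str) -> bool:
    # parse_wheel_filename returns (name, version, build, tags).
    *_, wheel_tags = parse_wheel_filename(filename)
    supported = set(sys_tags())  # tags the running interpreter supports
    return not supported.isdisjoint(wheel_tags)
```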
sysconfig.py uses the environment variable `_PYTHON_SYSCONFIGDATA_NAME`
to decide where to look for the sysconfig data file with info about the compile target.
We also need to separately ensure that our sysconfig data file is on the path. We
don't want the rest of our target stdlib on the path, so I made an extra sysconfigdata
folder, copied the sysconfig data into it, and put that folder on the path.
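Schematically, the mechanism looks like this (the data-file name and path below are examples, not the exact ones we ship):
```py
import os, sys

# Point sysconfig at the target's data file instead of the host's ...
os.environ["_PYTHON_SYSCONFIGDATA_NAME"] = "_sysconfigdata__emscripten_wasm32-emscripten"
# ... and make only the folder holding that one file importable.
sys.path.insert(0, "/path/to/sysconfigdata")

import sysconfig
print(sysconfig.get_config_var("SOABI"))  # now reports the target's ABI
```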
This adds a function `micropip.freeze()` which creates a `packages.json`
lock file which includes all packages that were loaded by micropip in the
current Pyodide session. This can be used in subsequent sessions to load
the same set of packages with `pyodide.loadPackage`.
For example in our repl:
```py
from js.console import log
import micropip
await micropip.install("sphinx-version-warning") # Installs 19 wheels from pypi
log(micropip.freeze())
```
Then opening the browser console, we can copy the JSON and make a new
`packages.json` file out of it. (Our repl will just say "<Long output truncated>";
the browser console also truncates the output, but provides tools to make it easy to copy.)
Reloading the page with the new `packages.json`, `versionwarning` will autoload:
```py
import versionwarning # Automatically loads 19 wheels from PyPI
```
We detected that `versionwarning` is an export provided by the `sphinx-version-warning`
package via the `top_level.txt` file. This file is part of the `egg-info` spec but isn't
mentioned anywhere in the `dist-info` spec. But wheels seem to include it nonetheless.
BrowserFS can mount custom filesystems into Emscripten.
However, it requires the `PATH` and `ERRNO_CODES` exports from
Emscripten in addition to `FS`.
This exports `PATH` and `ERRNO_CODES` from `Module` into the `pyodide`
JavaScript API so they can be used with BrowserFS.
Currently the following code fails:
```py
from js import eval
eval("Object.create(null)")
```
with:
```py
Traceback (most recent call last):
File "<console>", line 1, in <module>
JsException: TypeError: Cannot read properties of undefined (reading 'name')
```
This fixes it.
This switches to using the file system to store the information about packages
and using importlib.metadata to retrieve it.
This has two related benefits:
1. We don't have to maintain our own separate state about what we've installed.
2. We are guaranteed to agree with the Python stdlib about whether or not a
package is installed, and if so, which version. Since the state is maintained in
only one place, there is no chance of it diverging.
According to the packaging specs, if the package is from a URL we should record
that URL in a `direct_url.json` file:
https://packaging.python.org/en/latest/specifications/direct-url/#direct-url
Other than that, we should set `INSTALLER` to `pyodide.loadPackage`
if the package was loaded with `pyodide.loadPackage`, either directly or
indirectly via micropip; otherwise we set `INSTALLER` to `micropip`. That
way we can maintain the current behavior of `micropip.list`:
- if `direct_url.json` is present, then `source` is the URL in `direct_url.json`;
- otherwise, `source` is `pyodide` if `INSTALLER` is `pyodide.loadPackage`,
and `micropip` if `INSTALLER` is `micropip`.
Oddly enough, the packaging specs suggest no place to record the source
index, so it seems that, according to the specs, if a package was installed
by name from some custom index, this info should not be recorded.
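For illustration, reading this state back with the stdlib looks roughly like this (values shown are examples):
```py
import importlib.metadata as metadata

dist = metadata.distribution("micropip")
print(dist.version)                       # e.g. "0.1"
print(dist.read_text("INSTALLER"))        # "micropip" or "pyodide.loadPackage"
print(dist.read_text("direct_url.json"))  # None unless installed from a URL
```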
This PR does the following to complete #2431:
* Calculate the Subresource Integrity hash from the sha256, as explained here:
https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
As the document explains, it is a base64-encoded string of the "binary" hash,
not the data returned by the `hexdigest()` method (see the sketch after this list).
* Implement verification of the checksum when loading packages.
Verification is skipped in the Node.js environment, as the node-fetch module
doesn't support the integrity option. Node 17.5.0 has added the fetch API as
experimental; it would be prudent to come back and fix this when we are ready
to use that version.
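A sketch of the hash conversion described in the first bullet above:
```py
import base64

def sri_from_sha256_hex(hex_digest: str) -> str:
    # SRI wants base64 of the raw digest bytes, not of the hex string.
    return "sha256-" + base64.b64encode(bytes.fromhex(hex_digest)).decode()

# e.g. sri_from_sha256_hex(hashlib.sha256(data).hexdigest())
```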