pluggy >= 1.1 emits really annoying warnings when the old-style wrapper hooks raise errors,
telling us to use the new-style wrappers. This follows that advice to get rid of the warnings.
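For reference, the migration looks roughly like this (a minimal sketch using a pytest hook as the host; the old style is shown in the comment):

```py
import pytest

# Old style: @pytest.hookimpl(hookwrapper=True) with
#     outcome = yield
#     outcome.get_result()

# New style (pluggy >= 1.1): a plain yield hands back the hook's result,
# and errors propagate as ordinary Python exceptions.
@pytest.hookimpl(wrapper=True)
def pytest_runtest_call(item):
    result = yield
    return result
```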
I also reordered prettier to come last since it's the slowest, and I switched from
black to ruff-format, which documents its defaults as nearly the same as black's.
Before this PR, `pyimport` could be used like `pyimport("package")` or `pyimport("package.module")`, but `pyimport("package.attribute")` failed. This updates `pyimport` so that it can also look up package attributes.
I also updated the docs for `pyimport`.
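Conceptually, the new lookup falls back from module import to attribute access. A minimal Python sketch of that resolution logic (a hypothetical helper, not the actual implementation):

```py
import importlib

def pyimport_sketch(path: str):
    # Resolve "pkg.mod.attr"-style paths: import the longest importable
    # module prefix, then walk the remaining components as attributes.
    parts = path.split(".")
    for split in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:split]))
        except ImportError:
            continue
        for attr in parts[split:]:
            obj = getattr(obj, attr)
        return obj
    raise ModuleNotFoundError(path)
```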
This uses pyodide/pytest-pyodide#117 to run doctests in
Pyodide. I also turned on and fixed various doctests that were failing for
unrelated reasons.
This is close to finishing the refactor. I removed the last few uses of the
Hiwire JS APIs from the PyProxy buffer APIs. I reworked the whole
JsProxy_create family to use JsVal everywhere. I removed most of the remaining
logic from hiwire.c. The only remaining uses of JsRef are in struct fields
where it is needed.
Uses the JS Promise integration stack switching API to allow blocking for JavaScript promises and
`PyodideFuture` objects. It's a bit complicated...
This doesn't include support for reentrant switching; currently, attempting that will corrupt the Python VM.
This fixes a number of problems with the old stream handling:
1. It was not possible to set a custom errno (necessary for proper interrupt
   handling and possibly for other things).
2. It was inefficient: in many cases we have data in one buffer that needs to
   be placed into a different buffer, but we had to implement a function that
   gets one byte out of the source buffer and then call it repeatedly to move
   the data one byte at a time.
3. Ease of implementation: in many cases we already have perfectly good
   buffer-manipulation APIs, so if we have direct access to the true source or
   target buffer we can just use them. See the node IO code, which got much
   simpler.
This is backwards compatible, so you can still use the old input mechanism
or use buffered or raw output. But it adds a new way of directly implementing
read/write. For simplicity, we ensure that the source/destination buffers are
always `Uint8Array` views that point to exactly the region that is meant to be
read or written.
The old mechanisms are faster than before and can now correctly support
keyboard interrupts. Other than that, I think the original behavior is
unchanged. I added a lot more test coverage to ensure backwards compatibility,
since coverage was pretty anemic before.
I think the read/write APIs are mostly pretty simple to use, with the exception
that someone might forget to return the number of bytes read. JavaScript's
ordinary behavior coerces the `undefined` to 0, which leads to an infinite loop
in which the filesystem repeatedly asks to read/write the same data because it
sees no progress. I added a check that writes an error message to the console
and sets EIO when `undefined` is returned, so the infinite loop is prevented
and the problem is explained.
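A matching read-side sketch (again Python standing in for the JS API; names are illustrative), showing the return value the handler owes the filesystem:

```py
def make_reader(data: bytes):
    pos = 0

    def read(buffer: memoryview) -> int:
        nonlocal pos
        n = min(len(buffer), len(data) - pos)
        buffer[:n] = data[pos:pos + n]
        pos += n
        # Forgetting this return is the bug described above: the JS side
        # coerces undefined to 0 and spins forever; the new check reports
        # EIO instead.
        return n  # 0 signals EOF
    return read
```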
We've recently received user feedback (from @DerThorsten and @laffra) that
Python => JS calls are a bit slow when the arguments are proxied. Profiling
showed that 90% of the time was being spent destroying the pyproxies. The line
profiles showed that every access to `pyproxy.$$` was very slow. The problem is
that these accesses go through a slow path in the `PyProxy` `get` trap;
switching to a `Symbol` key hits a fast path in the `get` trap.
This patch was enough to bring the time spent in destroy_proxy down from 90% to
6% of the runtime for the following profile:
```py
from pyodide.ffi import create_proxy
from pyodide.code import run_js
from timeit import timeit
n = 100000
jsf = run_js("() => {}")
a = object()
timeit("jsf(a, a, a, a)", number=n, globals={"jsf": jsf, "a": a}) / n * 1e6
```
This PR splits the package tests in CI so that `no-numpy-dependents` packages can be tested earlier. In detail, pytest will now save test results into its cache directory, and if the --skip-passed option is given, it will skip previously successful tests.
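A minimal sketch of how such an option can be implemented on top of pytest's built-in cache (hypothetical plugin code and cache key, not necessarily what the PR does; the half that records passed node ids after a run is elided):

```py
def pytest_addoption(parser):
    parser.addoption("--skip-passed", action="store_true",
                     help="skip tests that passed in a previous run")

def pytest_collection_modifyitems(config, items):
    if not config.getoption("--skip-passed"):
        return
    # config.cache persists between runs in pytest's cache directory
    passed = set(config.cache.get("pyodide/passed-tests", []))
    items[:] = [item for item in items if item.nodeid not in passed]
```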
Various improvements to `test_micropip`. The main feature is a new fixture,
`mock_fetch`, with an `add_pkg` method that takes a package name, a map
from version to requirements, and a choice of platform. This should
hopefully make writing more tests a lot easier (which is good, because
we could use more micropip test coverage but are limited by the
difficulty of writing good tests).
This also adds a fixture that creates distinct dummy package names, and it
enables `@pytest.mark.asyncio` to handle the async calls rather than using
`asyncio.get_event_loop().run_until_complete`.
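Based on that description, a test might look something like this (hypothetical usage; the exact fixture names and `add_pkg` signature may differ):

```py
import pytest
import micropip

@pytest.mark.asyncio
async def test_install_dummy(mock_fetch, dummy_pkg_name):
    # one version, no requirements, default platform
    mock_fetch.add_pkg(dummy_pkg_name, {"1.0.0": []})
    await micropip.install(dummy_pkg_name)
```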
* chore: add some incomplete types
* chore: modernize pyproject.toml
Adding more incomplete types. We're about 2/3 of the way toward being able to
turn on the strictness flag for them.
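For context, "incomplete" here is mypy's term for a def with some but not all annotations; an illustration of the three levels (assuming the strictness flag in question is mypy's `--disallow-untyped-defs` / `--disallow-incomplete-defs` pair):

```py
from typing import Callable

def f1(callback, timeout):  # untyped: no annotations at all
    ...

def f2(callback, timeout: float) -> None:  # incomplete: some annotations
    ...

def f3(callback: Callable[[], None], timeout: float) -> None:  # complete
    ...
```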
* Add opencv-python
* Update comment
* Add JPEG, PNG, WEBP, ZLIB support
* Add tests for image processing
* Add more core modules
* Disable opencl
* Replace lena with baboon and add more tests
* Add file system support
* Add ffmpeg
* Add more tests
* Disable pthread in ffmpeg
* Disable canonical input processing mode in node test
* Update changelog
* Remove import test
* Allow more time in the first test
* Split out libwebp
* Fix node test
* Use a separate CI job for opencv-python
* Fix generator
* Update changelog
* Remove protobuf package
* Try to fix CI workspace conflict
* Fix CI
* Use another CI job for generating unified packages.json
* Try to fix CI
* Fix CI again
* Disable verbose build
* Prevent building opencv-python twice
* Persist only build artifacts
* Separate CMake args into a script
* Try to reuse build packages job
* Fix CI
* Fix typo
* Fix merge conflict
* Use large resource class for package build
* Do not upload unwanted artifacts
* Do not upload unwanted artifacts
Various fixes to improve the npm package.
Switch to publishing the dist folder, fix indexURL for node, fix node CommonJS
and ES6 imports, publish only the core Pyodide interpreter to npm, and
download and cache the other wheels on first use.
dist is both more accurate (a 'build' directory is normally where you do the
build, and normally holds intermediate build artifacts no one cares about) and
less common in the code base: after this change, \bbuild\b has 466 matches
whereas \bdist\b has 101; without word boundaries, build has 1072 matches
whereas dist has 362.
All libffi tests pass now. The only failing ctypes test is
test_callback_too_many_args, which no longer segfaults; it only soft-fails.
I'm planning to submit a PR to CPython that fixes test_callback_too_many_args.
See also:
bugs.python.org/issue47208
https://github.com/emscripten-core/emscripten/pull/16658
When I added unpack_buffer_archive, code reviewers said it was redundant with
unpack_buffer and the two should be merged. I said merging was too annoying.
They were right. This merges the functions into a single function with a
38-line docstring and an 18-line implementation.
If no indexURL is provided, we throw and catch an error and then use
ErrorStackParser to calculate where pyodide.js has been loaded from.
Resolves #2290.
Question: But getting the URL from an error stack trace is, well... really
hacky. Can't we use
[`document.currentScript`](https://developer.mozilla.org/en-US/docs/Web/API/Document/currentScript)
or
[`import.meta.url`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import.meta)
instead?
Answer: `document.currentScript` works on the browser main thread.
`import.meta` works in ES6 modules. In a classic web worker, I think there is
no approach that works. We would also need some third approach for node when
loading a CommonJS module with `require`. The stack trace approach, on the
other hand, works in every case without any feature-detection code.
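For illustration, the same trick sketched in Python (the real code throws a JS Error and parses its `stack` with ErrorStackParser):

```py
import traceback

def current_script_path() -> str:
    try:
        raise Exception("probe")
    except Exception as err:
        # The innermost frame of the caught traceback is the `raise` above,
        # so its filename tells us where this code was loaded from.
        return traceback.extract_tb(err.__traceback__)[-1].filename
```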