Makes `python2js` do what `python2js_val` did and removes `python2js_val`.
Also, when accessing a `JsProxy` attribute invokes a getter and the getter
throws an error, the error is now propagated instead of being turned into an
`AttributeError`.
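For example, a minimal sketch of the new behavior (the object and getter here are
made up for illustration; `JsException` is how Pyodide surfaces JavaScript errors
to Python):
```js
const obj = {
  get broken() {
    // This error now reaches Python as-is instead of being
    // converted into an AttributeError.
    throw new Error("getter failed");
  },
};
pyodide.globals.set("obj", obj);
pyodide.runPython(`
from pyodide.ffi import JsException
try:
    obj.broken
except JsException as e:
    print(e)  # Error: getter failed
`);
```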
Uses the JS Promise integration stack switching API to allow blocking on JavaScript
promises and `PyodideFuture` objects. It's a bit complicated...
This doesn't include support for reentrant switching; currently, doing that will corrupt the Python VM.
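As a rough sketch of what this enables (assuming JSPI is available in the runtime
and that the blocking helper is exposed to Python as `run_sync`; treat both as
assumptions rather than the final API):
```js
await pyodide.runPythonAsync(`
from pyodide.code import run_js
from pyodide.ffi import run_sync

promise = run_js("new Promise((resolve) => setTimeout(() => resolve(42), 100))")
# Blocks this Python code on the promise without blocking the JS event loop.
print(run_sync(promise))  # 42
`);
```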
ZenGL provides OpenGL bindings for Python.
The same code that runs natively also runs in the browser as-is, without modifications.
It does not depend on SDL, Emscripten GLES, or anything else.
It binds directly to WebGL2.
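Assuming the package is published under the name `zengl`, it loads like any other
Pyodide package:
```js
await pyodide.loadPackage("zengl");
pyodide.runPython(`
import zengl
# zengl binds directly to WebGL2, so a canvas with a WebGL2 context is
# still required before creating a rendering context.
`);
```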
This makes IDEs, documentation generation tools, and linters handle our
generated files better. I set the project root to be `src` instead of `src/js`
so that we are allowed to import files directly from `src/core`. This way we
don't have to copy `error_handling.ts`; we can just import
`../core/error_handling`.
I made a new folder called `src/js/generated` to place generated files into and
added TypeScript resolution rules so that when we import a file called
`generated/blah` we first look for `blah` in `src/js/generated` and then fall
back to a file called `blah` in `src/core`.
This also allows us to move around fewer files when building the docs
and in the Makefile.
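A hypothetical `tsconfig.json` fragment showing how such a fallback can be expressed
with TypeScript's standard `baseUrl`/`paths` options (the exact values in our config
may differ):
```json
{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "generated/*": ["js/generated/*", "core/*"]
    }
  }
}
```
TypeScript tries each entry in a `paths` array in order, which gives exactly the
behavior described above: look in `src/js/generated` first, then fall back to `src/core`.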
This PR adds the ContourPy package, which is a C++ extension that is a dependency
of Matplotlib >= 3.6.0 and Bokeh >= 3.0.0. Note that this is the previous version (1.0.7)
of ContourPy, not the latest one (1.1.0) that uses Meson, as I am still trying to get the
latter to build with the correct compiler flags.
Profiling shows that `PyObject_IsInstance` is pretty expensive. This
skips it when conditions we've already measured imply that it will
return false anyway.
This gives a 33% performance improvement on the following:
```js
pyodide.runPython(`
from pyodide.code import run_js
from timeit import timeit
f = run_js("() => {}")
d = {}
n = 200000
timeit("f(d)", number=n, globals={"f": f, "d": d}) / n * 1e6
`);
```
We've received feedback from users of other request APIs that they expect
the method to be called `response.text` instead of `response.string`. Indeed, both
the Fetch `Response` API and the Python `requests` library use this convention:
https://developer.mozilla.org/en-US/docs/Web/API/Response/text
https://requests.readthedocs.io/en/latest/api/#requests.Response.text
This adds `response.text` to `FetchResponse`. It is a synonym for `response.string`.
This also marks `response.string` as deprecated but does not schedule it for removal.
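A small sketch of the new synonym (the URL is a placeholder):
```js
await pyodide.runPythonAsync(`
from pyodide.http import pyfetch

response = await pyfetch("https://example.com/data.txt")
body = await response.text()    # new name, matches fetch and requests
# body = await response.string()  # old name, still works but deprecated
`);
```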
This fixes a number of problems with the old stream handling:
1. Not possible to set a custom errno (necessary for proper interrupt
handling and possibly for other things)
2. Inefficient: in a lot of cases we have data in one buffer and we need
it placed into a different buffer, but we have to implement a function
that gets one byte out of the source buffer and then call it repeatedly
to move one byte at a time to the target buffer.
3. Ease of implementation: in many cases we already have perfectly good
buffer manipulation APIs, so if we have direct access to the true source
or target buffer, we can just use them. See the Node IO code, which got
much simpler.
This is backwards compatible, so you can still use the old input mechanism
or use buffered or raw output. But it adds a new method of directly implementing
read/write. For simplicity, we ensure that the source/destination buffers are
always `Uint8Array` views that point to exactly the region that is meant to be
read/written.
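For example, a direct `write` handler for stdout might look like this (a sketch
based on the description above, assuming it is installed via `pyodide.setStdout`):
```js
const decoder = new TextDecoder();
pyodide.setStdout({
  // `buffer` is a Uint8Array covering exactly the bytes to be written.
  write(buffer) {
    console.log(decoder.decode(buffer));
    return buffer.length; // report how many bytes we consumed
  },
});
```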
The old mechanisms are faster than before and can correctly support keyboard
interrupts. Other than that I think the original behavior is unchanged. I added a
lot more test coverage to ensure backwards compatibility since there was pretty
anemic coverage before.
I think the read/write APIs are mostly pretty simple to use, with the exception
that someone might forget to return the number of bytes read. JavaScript's ordinary
behavior coerces the `undefined` to a 0, which leads to an infinite loop where the
filesystem repeatedly asks to read/write the same data since it sees no progress.
I added a check that writes an error message to the console and sets `EIO` when
`undefined` is returned, so the infinite loop is prevented and the problem is explained.
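For instance, a sketch of a direct `read` handler that gets the return value right
(assuming it is installed via `pyodide.setStdin` and that the input fits in the
destination buffer):
```js
const encoder = new TextEncoder();
let sent = false;
pyodide.setStdin({
  // `buffer` is a Uint8Array pointing at exactly the region to fill.
  read(buffer) {
    if (sent) {
      return 0; // 0 bytes read signals EOF
    }
    const bytes = encoder.encode("hello\n");
    buffer.set(bytes);
    sent = true;
    // Forgetting this return is the mistake described above; the new
    // check reports it and sets EIO instead of looping forever.
    return bytes.length;
  },
});
```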