This PR adds the ContourPy package, a C++ extension that is a dependency
of Matplotlib >= 3.6.0 and Bokeh >= 3.0.0. Note that this is the previous version of
ContourPy (1.0.7), not the latest one (1.1.0), which uses Meson; I am still trying to get that
to build with the correct compiler flags.
Profiling shows that `PyObject_IsInstance` is fairly expensive. This change
skips it when conditions we have already measured imply that it would
return false anyway.
This gives a 33% performance improvement on the following benchmark:
```js
pyodide.runPython(`
from pyodide.code import run_js
from timeit import timeit
f = run_js("() => {}")
d = {}
n = 200000
timeit("f(d)", number=n, globals={"f": f, "d": d}) / n * 1e6
`);
```
We've received feedback from users of other request APIs that they expect
the method to be called `response.text` instead of `response.string`. Indeed, both
the Fetch `Response` API and the Python requests library use this convention:
https://developer.mozilla.org/en-US/docs/Web/API/Response/text
https://requests.readthedocs.io/en/latest/api/#requests.Response.text
This adds `response.text` to `FetchResponse`. It is a synonym for `response.string`.
This also marks `response.string` as deprecated but does not schedule it for removal.
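For illustration, a minimal usage sketch (the URL is a placeholder; `response.text()` behaves exactly like `response.string()`):
```js
await pyodide.runPythonAsync(`
from pyodide.http import pyfetch
response = await pyfetch("https://example.com/data.txt")  # placeholder URL
body = await response.text()  # new name, equivalent to response.string()
`);
```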
This fixes a number of problems with the old stream handling:
1. It was not possible to set a custom errno (necessary for proper interrupt
handling and possibly for other things).
2. It was inefficient: in many cases we have data in one buffer and need it
placed into a different buffer, but we had to implement a function that gets one
byte out of the source buffer and then call it repeatedly to move the data one
byte at a time into the target buffer.
3. It was awkward to implement against: in many cases we already have perfectly
good buffer manipulation APIs, so if we have direct access to the true source
or target buffer we can just use those. See the Node IO code, which got
much simpler.
This is backwards compatible, so you can still use the old input mechanism
or use buffered or raw output, but it adds a new way to implement read/write
directly. For simplicity, we ensure that the source/destination buffers are
always `Uint8Array` views that point to exactly the region that is meant to be
read or written.
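As a rough sketch of the direct-output path (assuming the handler shape accepted by `pyodide.setStdout`; treat the option names as illustrative):
```js
const chunks = [];
pyodide.setStdout({
  // `buffer` is a Uint8Array view covering exactly the bytes to be written.
  write(buffer) {
    chunks.push(new TextDecoder().decode(buffer));
    return buffer.length; // report how many bytes were consumed
  },
});
```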
The old mechanisms are faster than before and can correctly support keyboard
interrupts. Other than that I think the original behavior is unchanged. I added a
lot more test coverage to ensure backwards compatibility since there was pretty
anemic coverage before.
I think the read/write APIs are mostly simple to use, with the exception
that someone might forget to return the number of bytes read. JavaScript's ordinary
behavior coerces the `undefined` to 0, which leads to an infinite loop: the
filesystem repeatedly asks to read/write the same data because it sees no progress.
I added a check that writes an error message to the console and sets EIO when `undefined`
is returned, so the infinite loop is prevented and the problem is explained.
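For example, a direct `read` handler has to end by returning the byte count (a sketch assuming `pyodide.setStdin`); returning `undefined` from it now logs an error and sets EIO rather than looping forever:
```js
const input = new TextEncoder().encode("hello\n");
let pos = 0;
pyodide.setStdin({
  // `buffer` is a Uint8Array view covering exactly the region to fill.
  read(buffer) {
    const n = Math.min(buffer.length, input.length - pos);
    buffer.set(input.subarray(pos, pos + n));
    pos += n;
    return n; // 0 signals EOF; forgetting this return is what the EIO check catches
  },
});
```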
Rendering the destroyed error messages for PyProxies is pretty inefficient.
This adds a setting to turn on debug mode. When debug mode is off, a cheaper
destroyed message is used instead.
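A sketch of the intended usage, assuming the setter is exposed as `pyodide.setDebug`:
```js
// Turn on the detailed (but slower) destroyed-PyProxy error messages while debugging:
pyodide.setDebug(true);
// ... reproduce the issue ...
pyodide.setDebug(false); // back to the cheap message
```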
A descriptor is *writable* if either its `writable` field is true or it has a setter.
A descriptor is *deletable* if it is configurable. Also, the normal JS behavior
here is to return `false`, not to throw.
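In plain JS terms, the checks look roughly like this (an illustrative sketch, not the actual trap code):
```js
// A property can be assigned if the descriptor is data-writable or has a setter;
// it can only be deleted if the descriptor is configurable.
function isWritable(desc) {
  return !!desc && (desc.writable === true || typeof desc.set === "function");
}
function isDeletable(desc) {
  return !!desc && desc.configurable === true;
}

const obj = Object.freeze({ x: 1 });
const desc = Object.getOwnPropertyDescriptor(obj, "x");
console.log(isWritable(desc), isDeletable(desc)); // false false
```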
Upgrades scipy to its latest release, 1.11.1 (released 28 June 2023).
I ran the scipy test suite locally and did not notice any new problems.
There are some additional failures in tests that were added between
scipy 1.10.1 and 1.11.1; most are due to Pyodide limitations. There is also
the `test_cdf_against_generic_integrators` test in `scipy.stats.tests.test_multivariate`,
which seems to indicate that `scipy.integrate.tplquad` is not converging, but that
appears to be the case in scipy 1.10.1 too.
The previous logic raised an OSError if a response returned a status code of 400
or greater, but it is useful to be able to retrieve the bodies of such
responses, as they often contain additional information about the error.
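A sketch of the resulting workflow, assuming this refers to the `FetchResponse` objects returned by `pyodide.http.pyfetch` (the URL is a placeholder):
```js
await pyodide.runPythonAsync(`
from pyodide.http import pyfetch
response = await pyfetch("https://example.com/missing")  # placeholder URL that returns 404
if not response.ok:
    # The body can now be retrieved instead of an OSError being raised,
    # so the server's error details are available.
    detail = await response.text()
`);
```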
This change allows users to customize where wheels are cached in Node.
This is important in the context of Docker images and other cases where
the `node_modules` folder isn't writable.
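A sketch of the intended usage in Node, assuming the new option is exposed on `loadPyodide` as `packageCacheDir` (the path is just an example):
```js
import { loadPyodide } from "pyodide";

const pyodide = await loadPyodide({
  // Cache downloaded wheels somewhere writable instead of inside node_modules.
  packageCacheDir: "/tmp/pyodide-package-cache",
});
```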
Resolves #3924. Some code checks whether an object is callable with `instanceof Function`.
The correct way to check for callability is with `typeof x === "function"`, but ideally we want
to work with existing code that uses a less accurate callability check.
To handle this, we make a copy of the `PyProxy` class that is a subclass of `Function`.
Since the difference between `PyProxy` and `PyProxyFunction` is an implementation
detail, we use `[Symbol.hasInstance]` to make them recognize the same set of objects
as their instances.
We also need some extra logic to filter out the additional attributes that come from the
`Function` prototype.
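A stripped-down sketch of the `[Symbol.hasInstance]` trick; the `$$isPyProxy` tag is a stand-in for Pyodide's real bookkeeping, not an actual field name:
```js
function isPyProxy(obj) {
  // Stand-in check for however the real implementation tags its proxies.
  return obj != null && obj.$$isPyProxy === true;
}

class PyProxy {
  static [Symbol.hasInstance](obj) {
    return isPyProxy(obj);
  }
}

// Copy of PyProxy that extends Function, so `instanceof Function` checks pass
// for callable proxies.
class PyProxyFunction extends Function {
  static [Symbol.hasInstance](obj) {
    return isPyProxy(obj);
  }
}

const callableProxy = Object.assign(() => {}, { $$isPyProxy: true });
callableProxy instanceof PyProxy;         // true
callableProxy instanceof PyProxyFunction; // true
callableProxy instanceof Function;        // true, since it really is a function
```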
Prior to this commit:
```js
pyodide.registerJsModule("xx", {a: 2});
pyodide.runPython("from xx import *");
```
raises:
```
ImportError: from-import-* object has no __dict__ and no __all__
```
Afterwards, it behaves as expected: a variable `a` equal to 2 is introduced into globals.
This also updates the command-line runner to pass in all ambient environment
variables, except that `HOME` is set to the working directory.
`homedir` is now deprecated. I added handling for both `homedir` and `env.HOME`:
if both are passed, an error is raised; otherwise they mean the same thing.
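A sketch of how the two spellings relate, assuming the new option is `env` on `loadPyodide` (the path is just an example):
```js
// New style: pass environment variables, including HOME, via `env`.
const pyodide = await loadPyodide({
  env: { HOME: "/home/web_user" },
});

// Old, now-deprecated style: loadPyodide({ homedir: "/home/web_user" }).
// Passing both `homedir` and `env.HOME` raises an error.
```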
Prior to this PR, internal Python code paths that use `PySequence_*` methods
directly would fail on JS Arrays. To fix this, we implement `sq_length`, `sq_item`,
and (for mutable sequences) `sq_ass_item`. I also added implementations for
the rest of the sequence methods, `sq_concat` and `sq_repeat`. Strangely,
`sq_inplace_concat` already existed.
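A quick way to exercise the new slots from Python is through the `operator` module, which dispatches to the C-level sequence protocol; this is a rough sketch, and the concrete type of the concatenation result is not important here:
```js
pyodide.runPython(`
import operator
from pyodide.code import run_js

a = run_js("[1, 2, 3]")

# len() uses sq_length; operator.concat maps to PySequence_Concat, which now
# goes through the new sq_concat slot for JS Arrays.
assert len(a) == 3
doubled = operator.concat(a, a)
assert len(doubled) == 6
`);
```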