It is not valid to assume that ${MICROPY_PORT_DIR}/boards/${MICROPY_BOARD}
is equal to ${MICROPY_BOARD_DIR}, because the latter could point to a path
outside the main MicroPython repository.
Replace this path with the canonical ${MICROPY_BOARD_DIR} so that pins.csv
is correctly located when building against out-of-tree board definitions.
Additionally, remove MICROPY_BOARDS_DIR to discourage similar mistakes.
Signed-off-by: Phil Howard <phil@gadgetoid.com>
Without this change, going to lightsleep stops the USB peripheral clock,
which can lead to either the device going into a weird state or the host
deciding to issue a bus reset.
This change keeps the USB peripheral clocks enabled only if the USB device
is currently active and a host has configured the device. This means the
USB device continues to respond to host transfers and (presumably) will
even complete pending endpoint transfers. All other requests are NAKed
while still asleep, but the interaction with the host seems to resume
correctly on wake.
Otherwise, if USB is not active or configured by a host, USB clocks are
disabled, the same as before.
With the change, one can issue a `machine.lightsleep(...)` with USB CDC
connected; the USB CDC remains connected during the sleep and resumes when
the lightsleep finishes.
Tested on an RPi Pico, the power consumption is:
- During normal idle at the REPL, about 15.3mA.
- During lightsleep, prior to this change, about 1.35mA.
- During lightsleep, with this change and USB CDC connected, about 3.7mA.
If power consumption needs to be as low as possible while USB is connected,
one can use `machine.USBDevice` to disable the USB device before entering
lightsleep.
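For example, a minimal sketch of both options (the sleep duration below is
illustrative only):

    import machine

    # Keep USB CDC alive across the sleep (about 3.7mA on a Pico):
    machine.lightsleep(5000)

    # Or, for the lowest sleep current, disable the USB device first
    # (the host will then see the device disconnect):
    usbd = machine.USBDevice()
    usbd.active(False)
    machine.lightsleep(5000)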
As discussed at https://github.com/orgs/micropython/discussions/14401
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
To avoid undefined references to `mp_thread_begin_atomic_section()` /
`mp_thread_end_atomic_section()`, replace them with the
`MICROPY_BEGIN_ATOMIC_SECTION` / `MICROPY_END_ATOMIC_SECTION`
macros. That way, it's possible to build again with `MICROPY_PY_THREAD`
disabled (made possible by efa54c27b9).
Fixes commit 19844b4983.
Signed-off-by: Matthias Blankertz <matthias@blankertz.org>
This fixes the build for some esp32 and nrf boards (for example
`ARDUINO_NANO_33_BLE_SENSE` and `ARDUINO_NANO_ESP32`), which was broken by
commit c98789a6d8. Changes are:
- Allow the CDC TX/RX functions in `mp_usbd_cdc.c` to be enabled
separately from those needed for `MICROPY_HW_USB_CDC_1200BPS_TOUCH`.
- Add `MICROPY_EXCLUDE_SHARED_TINYUSB_USBD_CDC` option as a temporary
workaround for the nrf port to use.
- Declare `mp_usbd_line_state_cb()` in a header as a public function.
- Fix warning with type cast of `.callback_line_state_changed`.
Signed-off-by: Damien George <damien@micropython.org>
There are a few TinyUSB CDC functions used for stdio that are currently
replicated across a number of ports. Not surprisingly, in a couple of cases
these have started to diverge slightly, with additional features added to
one of them.
This commit consolidates a couple of key shared functions used directly by
TinyUSB-based ports, and makes those functions available to all.
Signed-off-by: Andrew Leech <andrew@alelec.net>
Previously, this was subject to races when incrementing/decrementing the
counter variable pendsv_lock.
Technically, all that's needed here would be to make pendsv_lock an atomic
counter.
This implementation fulfils a stronger guarantee: it also provides mutual
exclusion for the core which calls pendsv_suspend(). This is because the
current use of pendsv_suspend/resume in MicroPython is to ensure exclusive
access to softtimer data structures, and this does require mutual
exclusion.
The conceptually cleaner implementation would split the mutual exclusion
part out into a softtimer-specific spinlock, but this increases the
complexity and doesn't seem like it makes for a better implementation in
the long run.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The best_effort_wfe_or_timeout() and sleep_us() pico-sdk functions use the
pico-sdk alarm pool internally, and that has a bug.
Some usages inside pico-sdk (notably multicore_lockout_start_blocking())
will still end up calling best_effort_wfe_or_timeout(), although usually
with "end_of_time" as the timeout value, so it should avoid any alarm pool
race conditions.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Progress towards removing the pico-sdk alarm pool, due to a known issue.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This adds support for the TCP_NODELAY socket option for lwIP sockets.
Generally, TCP sockets use the Nagle algorithm, so new data is only sent
once an ACK is received, i.e. once all previously-sent data has already
been ACKed.
If the TCP_NODELAY option is set for a socket, every write to the socket
will trigger a packet to be sent.
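A minimal usage sketch (the peer address is hypothetical, and the constant
names assume the usual CPython-style spelling):

    import socket

    addr = socket.getaddrinfo("192.0.2.1", 5000)[0][-1]  # hypothetical peer
    s = socket.socket()
    s.connect(addr)
    # Disable the Nagle algorithm so each write goes out immediately:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.send(b"low-latency payload")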
Signed-off-by: Jared Hancock <jared@greezybacon.me>
This allows querying the GC heap size/used/free values, as well as the
number of alive JsProxy and PyProxy objects, referenced by proxy_c_ref and
proxy_js_ref.
Signed-off-by: Damien George <damien@micropython.org>
And clear the corresponding `proxy_c_ref[c_ref]` entry when the finaliser
runs. This then allows the C side to (eventually) garbage collect the
corresponding Python object.
Signed-off-by: Damien George <damien@micropython.org>
And clear the corresponding `proxy_js_ref[js_ref]` entry when the finaliser
runs. This then allows the JavaScript side to (eventually) free the
corresponding JavaScript object.
Signed-off-by: Damien George <damien@micropython.org>
So it's possible to know when an external C function is being called at the
top-level, eg by JavaScript without any intermediate C->JS->C calls.
Signed-off-by: Damien George <damien@micropython.org>
Instead of raising KeyError. These semantics match JavaScript behaviour
and make it much more seamless to pass Python dicts through to JavaScript
as though they were JavaScript {} objects.
Signed-off-by: Damien George <damien@micropython.org>
This adds a new undefined singleton to Python, that corresponds directly to
JavaScript `undefined`. It's accessible via `js.undefined`.
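A tiny sketch of its use (assuming the `js` module's proxy of the
JavaScript global scope provides `console`):

    import js

    print(js.undefined is None)   # False: it's a distinct singleton
    js.console.log(js.undefined)  # the JavaScript side sees `undefined`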
Signed-off-by: Damien George <damien@micropython.org>
This reverts part of commit fa23e4b093, to
make it so that Python `None` converts to JavaScript `null` (and JavaScript
`null` already converts to Python `None`). That's consistent with how the
`json` module converts these values back and forth.
Signed-off-by: Damien George <damien@micropython.org>
When a fatal error occurs it's important to know which precise version it
occurred on in order to be able to decode the crash dump information such
as the backtrace.
By wrapping the built-in IDF panic handler we can print some extra
information whenever a fatal error occurs. The message links to a new wiki
page which contains additional information on how to debug ESP32 issues,
and links to the bug reporting issue template.
Signed-off-by: Daniël van de Giessen <daniel@dvdgiessen.nl>
Fixes the automatic baudrate calculation results.
The default clock source on this SoC is HSE, not PCLK1. We could fix this
by switching to PCLK1 instead, but there are two extra complications:
- PCLK1 on this board is 42.5MHz and the Pyboard CAN sample_point
calculation requires an exact match, which is harder to hit with this
source frequency.
- It would be a breaking change for any existing Python code on this board
that specifies brp, bs1, bs2 to initialise CAN.
In the future it might be worth looking at switching to the PLL source on
this SoC instead, as this is a much higher frequency that would give higher
quality BRS bitrate matches (probably too high without using the second
divider going into the CAN peripheral, though, so more code changes would
also be needed).
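For reference, a sketch of the two ways CAN bit timing can be set from
Python (the bus number and timing values below are illustrative only):

    from pyb import CAN

    # Automatic bit timing, which relies on the calculation fixed here:
    can = CAN(1, CAN.NORMAL, baudrate=500000, sample_point=75)

    # Explicit bit timing, which depends directly on the CAN clock source
    # (and so would break if that source were changed):
    can = CAN(1, CAN.NORMAL, prescaler=5, bs1=14, bs2=2)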
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
And change the Python None conversion so that it converts to JavaScript
undefined.
The semantics for conversion of these objects are then:
- Python None -> JavaScript undefined
- JavaScript undefined -> Python None
- JavaScript null -> Python None
This follows Pyodide:
https://pyodide.org/en/stable/usage/type-conversions.html
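A small sketch of the behaviour after this change (assuming the
webassembly port with the `js` module available):

    import js

    # Python None crosses to JavaScript as undefined:
    js.console.log(None)                  # logs `undefined` in the JS console

    # JavaScript null comes back to Python as None:
    print(js.JSON.parse("null") is None)  # True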
Signed-off-by: Damien George <damien@micropython.org>
This commit defines a new `JsException` exception type which is used on the
Python side to wrap JavaScript errors. That's used when a JavaScript
Promise is rejected: the rejection reason is converted to a `JsException`
for the Python side to handle.
This new exception is exposed as `jsffi.JsException`.
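A sketch of catching a rejected Promise from Python, inside top-level
async code run via `runPythonAsync()` (the failing URL is hypothetical):

    import js
    import jsffi

    try:
        # Awaiting a Promise that rejects raises JsException:
        await js.fetch("https://invalid.example/")
    except jsffi.JsException as exc:
        print("JavaScript error:", exc)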
Signed-off-by: Damien George <damien@micropython.org>
For the ESP32C3/S2/S3, IDF v5 exposes a new internal temperature API which
is different from the one on the base ESP32 with IDF v4.
Thanks to @robert-hh for cleaner code and for testing the sensor capability
on these devices.
See discussion #10443.
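Assuming the sensor is exposed via the esp32 module's mcu_temperature()
function (the Python-level name is not stated in this message), reading it
looks like:

    import esp32

    # Assumed API: returns the internal MCU temperature in degrees Celsius.
    print(esp32.mcu_temperature())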
Signed-off-by: Rick Sorensen <rick.sorensen@gmail.com>
JavaScript semantics are such that the caller of an async function does not
need to await that function for it to run to completion. This commit makes
that behaviour also apply to top-level async Python code run via
`runPythonAsync()`.
Signed-off-by: Damien George <damien@micropython.org>
The `reason` in a rejected promise should be an instance of `Error`. That
leads to better error messages on the JavaScript side.
Signed-off-by: Damien George <damien@micropython.org>
Different MCUs have different requirements for the minimum number of bytes
that can be written to internal flash.
Signed-off-by: Damien George <damien@micropython.org>
The calculations `num_word32 / 4` and `num_word32 / 8` were rounding down
the number of words to program to flash, and therefore possibly truncating
the data (eg mboot could miss writing the final few words of the firmware).
That's fixed in this commit by adding extra logic to program any remaining
words. And the logic for H5 and H7 is combined.
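A small arithmetic sketch of the issue, with a hypothetical word count (H7
programs flash 8 x 32-bit words at a time, H5 4 words at a time):

    num_word32 = 70              # hypothetical number of words to program

    full_rows = num_word32 // 8  # on H7: 8 full rows -> only 64 words written
    remaining = num_word32 % 8   # 6 words that were previously dropped

    # The fix adds an extra, final program operation covering `remaining`.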
Signed-off-by: Damien George <damien@micropython.org>
In case there is a power failure during this operation, the key must be the
last thing that is written, to indicate valid data.
Signed-off-by: Damien George <damien@micropython.org>
This allows a simple way to run the existing asyncio tests under the
webassembly port, which doesn't support `asyncio.run()`.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds a significant portion of the existing MicroPython asyncio
module to the webassembly port, using parts of the existing asyncio code
and some custom JavaScript parts.
The key difference to the standard asyncio is that this version uses the
JavaScript runtime to do the actual scheduling and waiting on events, eg
Promise fulfillment, timeouts, fetching URLs.
This implementation does not include asyncio.run(). Instead one just uses
asyncio.create_task(..) to start tasks and then returns to JavaScript,
which will then run the tasks.
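For example, Python code run on the webassembly port can start background
work like this (the task body is illustrative):

    import asyncio

    async def ticker():
        for _ in range(10):
            print("tick")
            await asyncio.sleep(1)

    # No asyncio.run() here: create the task and return to JavaScript,
    # which drives the scheduling from its own event loop.
    asyncio.create_task(ticker())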
The implementation here tries to reuse as much existing asyncio code as
possible, and gets all the semantics correct for things like cancellation
and asyncio.wait_for. An alternative approach would be to reimplement Task,
Event, etc using JavaScript Promises. That approach is very difficult to
get right when trying to implement cancellation (because it's not possible
to cancel a JavaScript Promise).
Signed-off-by: Damien George <damien@micropython.org>