In JavaScript, when accessing an attribute such as `obj.attr`, a value of
`undefined` is returned if the attribute does not exist. This is unlike
Python semantics, where an `AttributeError` is raised. Furthermore, in some
cases in JavaScript (eg a Proxy instance) `attr in obj` can return false
yet `obj.attr` is still valid and returns something other than `undefined`.
So the only reliable way to determine whether a JavaScript attribute exists
is to attempt `obj.attr` directly.
To more closely match these JavaScript semantics when proxying a JavaScript
object through to Python, change the attribute lookup logic on a `JsProxy`
so that it immediately attempts `obj.attr` instead of first testing if the
attribute exists via `attr in obj`.
This allows JavaScript objects which dynamically create attributes to work
correctly on the Python side, with both `obj.attr` and `obj["attr"]`. Note
that `obj["attr"]` already works in all cases because it immediately does
the subscript access without first testing if the attribute exists.
As a benefit, this new behaviour matches the Pyodide behaviour.
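For illustration, a minimal sketch of the effect on the Python side of the
webassembly port, assuming the `js` module exposes `globalThis` (so `js.eval`
is JavaScript's global eval); the JS object here creates attributes on demand:

    import js  # webassembly port: `js` is a proxy of globalThis

    # A JavaScript object whose attributes are generated dynamically by a Proxy.
    obj = js.eval("new Proxy({}, { get: (t, name) => 'dynamic:' + String(name) })")

    print(obj.foo)     # now works: lookup goes straight to obj.foo on the JS side
    print(obj["foo"])  # already worked: direct subscript access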
Signed-off-by: Damien George <damien@micropython.org>
Following the approach of the esp32 rework, each variant now has a corresponding
`mpconfigvariant_VARIANT.mk` file associated with it. The base variant
also has a `mpconfigvariant.mk` file because it has options that none of
the other variants use.
Signed-off-by: Damien George <damien@micropython.org>
This commit reworks board variants on the esp32 port. It's a simple change
that moves the board variant configuration from an "if" statement within
`mpconfigboard.cmake` into separate files for each variant, with the name
of the variant encoded in the filename: `mpconfigvariant_VARIANT.cmake`.
The base variant can optionally have its own options in
`mpconfigvariant.cmake` (this file may be omitted, but every other variant
must have a corresponding `mpconfigvariant_VARIANT.cmake` file).
There are two benefits to this:
- The build system now gives an error if the variant that you specified
doesn't exist (because the mpconfigvariant file must exist with the
variant name you specify).
- No more error-prone if-logic needed in the .cmake files.
The way to build a variant is unchanged, still via:
$ make BOARD_VARIANT=VARIANT
Signed-off-by: Damien George <damien@micropython.org>
Compiling with arm-none-eabi-gcc 14.1.0 and -O2 gives warnings about
possible overflow when indexing extint arrays, such as `pyb_extint_callback`.
This is due to `machine_pin_obj_t.pin` having a bit-width of 5, and so a
possible value up to 31, which is usually larger than
`PYB_EXTI_NUM_VECTORS`.
To fix this, change `machine_pin_obj_t.pin` to a bit-width of 4. Only 4
bits are needed for ST MCUs, which have up to 16 pins per port.
Signed-off-by: Damien George <damien@micropython.org>
Follow-up to a84c7a0ed9: that change works most of the time, but has an
intermittent bug where USB doesn't resume as expected after waking from
light sleep.
It turns out that waking calls clocks_init(), which re-initialises the USB
PLL. Most of the time this is OK, but occasionally the clock glitches the
USB peripheral and it stops working until the next hard reset.
This commit adds a machine.lightsleep() test that, without this fix,
consistently hangs within the first two dozen iterations on rp2. With the
fix it passed over 100 times in a row.
The test is currently rp2-only, as it seems similar lightsleep USB issues
exist on other ports (both pyboard and ESP32-S3 native USB don't send any
data to the host after waking, until they receive something from the host
first).
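For reference, a rough sketch of the kind of test (not necessarily the exact
test added), assuming an rp2 board with the REPL on native USB CDC:

    import machine, time

    # Without the fix, USB CDC output typically stops within the first couple
    # of dozen iterations; with the fix, output keeps arriving at the host.
    for i in range(100):
        machine.lightsleep(100)
        time.sleep_ms(100)
        print("woke from lightsleep, iteration", i)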
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Adapts the pico-sdk clocks_init() into clocks_init_optional_usb(), which
takes an argument specifying whether or not to initialise the USB clocks.
To avoid a code size increase, the SDK clocks_init() function is
linker-wrapped to become clocks_init_optional_usb(true).
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
mp_thread_begin_atomic_section() is expected to be recursive (i.e. to
support nested machine.disable_irq() calls, or the case where Python code
calls disable_irq() and then the Python runtime calls mp_handle_pending(),
which also enters an atomic section to check the scheduler state).
On rp2, when not using core1, the atomic sections are recursive. However,
when core1 was active (i.e. when using _thread), a bug caused the core to
live-lock if an atomic section recursed.
This commit adds a test case specifically for mutual exclusion and
recursive atomic sections when using two threads. Without this fix the
test immediately hangs on rp2.
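A minimal sketch in the spirit of the added test (not the exact test),
assuming rp2 with `_thread` enabled; `machine.disable_irq()` enters the
atomic section, and nesting it exercises the recursive case on both cores:

    import _thread, machine

    counter = 0
    done = False

    def bump(n):
        global counter
        for _ in range(n):
            outer = machine.disable_irq()
            inner = machine.disable_irq()  # nested (recursive) atomic section
            counter += 1
            machine.enable_irq(inner)
            machine.enable_irq(outer)

    def worker():
        global done
        bump(10000)
        done = True

    _thread.start_new_thread(worker, ())
    bump(10000)
    while not done:
        pass
    print(counter)  # expect 20000; without the fix the board live-locks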
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This turns on the native RV32IMC code generator for the QEMU-based
RISC-V port, and removes tests that rely on native code generation
from the exclusion list (ie enables those tests).
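For reference, a tiny example of the code these tests exercise: the
`@micropython.native` decorator now compiles the function with the RV32IMC
native emitter on this port:

    import micropython

    @micropython.native
    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(10))  # 55, executed as native RV32IMC machine code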
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This allows increasing the Python recursion depth if needed.
Also increase the default to 2k words. There is enough RAM in the
browser/node context for this to be increased, and having a larger pystack
allows more complex code to run without hitting the limit.
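As a rough illustration (not part of this commit) of what the limit looks
like from Python code: each call frame consumes pystack, and on builds with
pystack enabled exhausting it raises a RuntimeError, so a larger default
allows deeper recursion:

    def depth(n=0):
        try:
            return depth(n + 1)
        except RuntimeError:  # eg "pystack exhausted"
            return n

    print("maximum reachable depth:", depth())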
Signed-off-by: Damien George <damien@micropython.org>
In the webassembly port there is no asyncio run loop running at the top
level. Instead, the Python asyncio run loop is scheduled through setTimeout
and run by the outer JavaScript event loop. Because tasks can become
runnable from an event external to Python (eg a JavaScript callback), the
run loop must be scheduled whenever a task is pushed onto the asyncio task
queue, otherwise tasks may wait forever on the queue.
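A hedged sketch of the scenario this fixes, where a task becomes runnable
from a JavaScript callback rather than from within Python (assumes the
webassembly port's `js` module and that Python callables can be passed to
setTimeout):

    import asyncio, js

    event = asyncio.Event()

    async def waiter():
        await event.wait()
        print("woken by JavaScript")

    asyncio.create_task(waiter())

    # A JavaScript callback later sets the event; pushing the waiting task back
    # onto the asyncio queue must also (re)schedule the run loop via setTimeout,
    # otherwise waiter() would never resume.
    js.setTimeout(event.set, 100)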
Signed-off-by: Damien George <damien@micropython.org>
This change allows doing a top-level await on asyncio primitives such as
Task and Event.
This feature enables better interaction and synchronisation between
JavaScript and Python, because `api.runPythonAsync` can now be used (called
from JavaScript) to await the completion of asyncio primitives.
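A short sketch of what this enables, assuming the Python code below is
passed to `api.runPythonAsync(...)` from JavaScript:

    import asyncio

    async def job():
        await asyncio.sleep(1)
        return 42

    task = asyncio.create_task(job())
    result = await task  # top-level await on an asyncio Task now completes
    print(result)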
Signed-off-by: Damien George <damien@micropython.org>
Unlike most other Zephyr libraries, libkernel.a is not built as a
whole-archive.
This change also fixes a linker error observed on nucleo_wb55rg while
preparing an upgrade to Zephyr v3.5.0, caused by an undefined reference to
`z_impl_k_busy_wait`.
Signed-off-by: Maureen Helm <maureen.helm@analog.com>
This adds a QEMU-based bare-metal 32-bit RISC-V port. For the time being
only QEMU's 32-bit "virt" board is supported, using the ilp32 ABI and the
RV32IMC architecture.
The top-level README and run-tests.py are updated for this new port.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
All of these files are first-party code written from scratch as part of
this repository, and were added when the top-level MIT license was active,
so they have an MIT license by default. Tracing back the git history shows
the original authors/source/copyright as follows:
- main.c, mpconfigport.h: copied from the bare-arm port [1].
- test_main.c: added in [2].
- mphalport.h: added in [3] then updated in [4].
- mps2.ld, nrf51.ld, stm32.ld, uart.h: added in [4].
- imx6.ld, uart.c, startup.c: added in [4] and updated in [5].
[1] Commit c557215822 in 2014, the initial
bare-arm port; see related ee857853d6.
[2] Commit c1c32d65af in 2014, initial
qemu-arm CI tests.
[3] Commit b0a15aa735 in 2016, enabling
extmods and their tests.
[4] Commit e7332b0584 in 2018, big refactor.
[5] Commit b84406f313 in 2021, adding
Cortex-A9 support.
Signed-off-by: Damien George <damien@micropython.org>
Currently, `cyw43_delay_ms()` calls `mp_hal_delay_ms()`, which uses PendSV
to set up a timer and waits for an interrupt using wfe. But during the
cyw43 initialisation stage PendSV is disabled, so this delay suspends on
the wfe instruction for an indefinite amount of time.
Work around this by changing the implementation of `cyw43_delay_ms()` to a
busy loop.
Fixes issue #15220.
Signed-off-by: Damien George <damien@micropython.org>
At startup, buffer the initial stdout / MicroPython banner so that it can
be sent to the host on initial connection of the USB serial port. This
buffering also works when CDC becomes disconnected while the device is
still printing to stdout: when CDC is reconnected, the most recent part of
stdout (depending on how big the internal USB FIFO is) is flushed to the
host.
This change is most obvious when a MicroPython device is first plugged in
(or reset) on a board that uses USB (CDC) serial in the chip itself for the
REPL interface. It doesn't apply to a UART going via a separate USB-serial
chip.
The stm32 port already has this buffering behaviour (it doesn't use
TinyUSB) and this commit extends that behaviour to the rp2, mimxrt, samd
and renesas-ra ports, which do use TinyUSB.
Signed-off-by: Andrew Leech <andrew@alelec.net>
It is not valid to assume that ${MICROPY_PORT_DIR}/boards/${MICROPY_BOARD}
is equal to ${MICROPY_BOARD_DIR}, because the latter could point to a path
outside the main MicroPython repository.
Replace this path with the canonical ${MICROPY_BOARD_DIR} so that pins.csv
is correctly located when building against out-of-tree board definitions.
Additionally remove MICROPY_BOARDS_DIR to discourage similar mistakes.
Signed-off-by: Phil Howard <phil@gadgetoid.com>
Without this change, going to lightsleep stops the USB peripheral clock,
which can lead to either the device going into a weird state or the host
deciding to issue a bus reset.
This change keeps the USB peripheral clocks enabled only if the USB device
is currently active and a host has configured the device. This means the
USB device continues to respond to host transfers and (presumably) will
even complete pending endpoint transfers. All other requests are NAKed
while still asleep, but the interaction with the host seems to resume
correctly on wake.
Otherwise, if USB is not active or configured by a host, USB clocks are
disabled, the same as before.
With this change, one can issue a `machine.lightsleep(...)` with USB CDC
connected; the CDC connection remains up during the sleep and resumes when
the lightsleep finishes.
Tested on an RPi Pico, the power consumption is:
- During normal idle at the REPL, about 15.3mA.
- During lightsleep, prior to this change, about 1.35mA.
- During lightsleep, with this change and USB CDC connected, about 3.7mA.
If power consumption needs to be as low as possible while USB is
connected, one can use `machine.USBDevice` to disable USB before entering
lightsleep.
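A hedged usage sketch on rp2 (the commented-out `USBDevice` call is one
possible way to do this and is an assumption, not part of this commit):

    import machine

    # With USB CDC connected, the link stays up across the sleep and output
    # resumes afterwards.
    machine.lightsleep(5000)
    print("awake, USB CDC still connected")

    # For the lowest power while a host is attached, disable USB first (the
    # CDC connection is lost and must be re-established afterwards), eg:
    # machine.USBDevice().active(False)
    # machine.lightsleep(5000)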
As discussed at https://github.com/orgs/micropython/discussions/14401
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
To avoid undefined references to `mp_thread_begin_atomic_section()` /
`mp_thread_end_atomic_section()`, replace them with the
`MICROPY_BEGIN_ATOMIC_SECTION` / `MICROPY_END_ATOMIC_SECTION`
macros. That way, it's possible to build again with `MICROPY_PY_THREAD`
disabled (made possible by efa54c27b9).
Fixes commit 19844b4983.
Signed-off-by: Matthias Blankertz <matthias@blankertz.org>
This fixes the build for some esp32 and nrf boards (for example
`ARDUINO_NANO_33_BLE_SENSE` and `ARDUINO_NANO_ESP32`), which was broken by
commit c98789a6d8. The changes are:
- Allow the CDC TX/RX functions in `mp_usbd_cdc.c` to be enabled
separately to those needed for `MICROPY_HW_USB_CDC_1200BPS_TOUCH`.
- Add `MICROPY_EXCLUDE_SHARED_TINYUSB_USBD_CDC` option as a temporary
workaround for the nrf port to use.
- Declare `mp_usbd_line_state_cb()` in a header as a public function.
- Fix warning with type cast of `.callback_line_state_changed`.
Signed-off-by: Damien George <damien@micropython.org>
There are a few TinyUSB CDC functions used for stdio that are currently
replicated across a number of ports. Not surprisingly, in a couple of
cases these copies have started to diverge slightly, with additional
features added to one of them.
This commit consolidates a couple of key shared functions used directly by
TinyUSB-based ports, and makes those functions available to all of them.
Signed-off-by: Andrew Leech <andrew@alelec.net>
Previously, this was subject to races when incrementing/decrementing the
counter variable pendsv_lock.
Technically, all that's needed here is to make pendsv_lock an atomic
counter.
This implementation provides a stronger guarantee: it also gives mutual
exclusion to the core which calls pendsv_suspend(). This is because the
current use of pendsv_suspend/resume in MicroPython is to ensure exclusive
access to softtimer data structures, and this does require mutual
exclusion.
A conceptually cleaner implementation would split the mutual exclusion
part out into a softtimer-specific spinlock, but that increases the
complexity and doesn't seem to make for a better implementation in the
long run.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The best_effort_wfe_or_timeout() and sleep_us() pico-sdk functions use the
pico-sdk alarm pool internally, and the alarm pool has a bug.
Some usages inside pico-sdk (notably multicore_lockout_start_blocking())
will still end up calling best_effort_wfe_or_timeout(), although usually
with "end_of_time" as the timeout value, so it should avoid any alarm pool
race conditions.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Progress towards removing the pico-sdk alarm pool, due to a known issue.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This adds support for the TCP_NODELAY socket option for lwIP sockets.
Generally, TCP sockets use the Nagle algorithm, which delays sending small
amounts of data until all previously-sent data has been ACKed.
If the TCP_NODELAY option is set on a socket, every write to the socket
triggers a packet to be sent immediately.
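A usage sketch, assuming the port exposes the usual CPython-style constant
names for this option:

    import socket

    addr = socket.getaddrinfo("192.0.2.1", 8080)[0][-1]  # placeholder address
    s = socket.socket()
    s.connect(addr)

    # Disable Nagle's algorithm: each write goes out immediately.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.send(b"small payload")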
Signed-off-by: Jared Hancock <jared@greezybacon.me>
This allows querying the GC heap size/used/free values, as well as the
number of alive JsProxy and PyProxy objects, referenced by proxy_c_ref and
proxy_js_ref.
Signed-off-by: Damien George <damien@micropython.org>
And clear the corresponding `proxy_c_ref[c_ref]` entry when the finaliser
runs. This then allows the C side to (eventually) garbage collect the
corresponding Python object.
Signed-off-by: Damien George <damien@micropython.org>
And clear the corresponding `proxy_js_ref[js_ref]` entry when the finaliser
runs. This then allows the JavaScript side to (eventually) free the
corresponding JavaScript object.
Signed-off-by: Damien George <damien@micropython.org>
So it's possible to know when an external C function is being called at the
top-level, eg by JavaScript without any intermediate C->JS->C calls.
Signed-off-by: Damien George <damien@micropython.org>
Instead of raising KeyError. These semantics match JavaScript behaviour
and make it much more seamless to pass Python dicts through to JavaScript
as though they were JavaScript {} objects.
Signed-off-by: Damien George <damien@micropython.org>