Most MCUs apart from Cortex-M0 with Thumb 1 have an instruction
for computing the "high part" of a multiplication (e.g., the upper
32 bits of a 32x32 multiply).
When they do, gcc uses this to implement a small and fast
overflow check using the __builtin_mul_overflow intrinsic, which
is preferable to the guard division method previously used in smallint.c.
However, in contrast to the previous mp_small_int_mul_overflow
routine, which checked not only that the result fits within mp_int_t
but also that it satisfies SMALL_INT_FITS(), __builtin_mul_overflow
only checks for overflow of the C type. As a result, a slight change
in the code flow is needed for MP_BINARY_OP_MULTIPLY.
Other sites using mp_small_int_mul_overflow already had the
result value flow through to a SMALL_INT_FITS check so they didn't
need any additional changes.
Do similarly for the _ll and _ull multiply overflow checks.
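The distinction can be illustrated with a small sketch in plain Python (the widths below are an assumption for a 32-bit build, where small ints reserve one tag bit): a product can pass the C-type overflow check yet still fail SMALL_INT_FITS().

    MP_INT_MAX = 2**31 - 1      # assumed range of the C type mp_int_t
    SMALL_INT_MAX = 2**30 - 1   # assumed SMALL_INT_FITS() limit (one tag bit)

    x, y = 50000, 30000
    product = x * y                   # 1500000000
    print(product <= MP_INT_MAX)      # True: __builtin_mul_overflow sees no overflow
    print(product <= SMALL_INT_MAX)   # False: a SMALL_INT_FITS() check is still needed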
Signed-off-by: Jeff Epler <jepler@gmail.com>
The spurious reset problem with ESP boards happens at disconnect time on
Windows (clearing DTR before RTS triggers a reset).
Previous workarounds tried to detect possible ESP boards and apply the
correct DTR and RTS settings when opening the port.
Instead, we can manually clear RTS before closing the port and thereby
avoid the reset issue. Opening the port can keep the default behaviour
(RTS & DTR both set).
close() is called from a finally block in the mpremote main module
(via do_disconnect()) - so this should always happen provided the Python
process isn't terminated by the OS.
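A minimal sketch of the idea, assuming a pyserial Serial object (the names below are illustrative, not mpremote's actual code):

    import serial

    port = serial.Serial("COM3")   # open with the default behaviour: DTR and RTS both set
    try:
        pass  # ... normal communication with the board ...
    finally:
        # De-assert RTS ourselves before closing, so the driver never clears
        # DTR while RTS is still set, and the ESP is not spuriously reset.
        port.rts = False
        port.close()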
One additional workaround is needed to prevent a spurious reset the first
time a Silicon Labs CP210x-based ESP board is opened by mpremote after
enumeration.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Prior to this fix, if a JavaScript thenable/Promise that was part of an
asyncio chain was rejected, the rejection would be ignored because the
Python-side `ThenableEvent` did not register a handler for it.
That's fixed by this commit, and a corresponding test is added.
Signed-off-by: Damien George <damien@micropython.org>
`cur_task` can never be `None` in the webassembly port, so compare it
against the top-level task to determine whether an asyncio Task is active
or not.
This fixes a bug where an error raised by an awaited JavaScript awaitable
would not be caught on the Python side. The fix here makes sure the error
is caught by Python, as shown by the new test.
Signed-off-by: Damien George <damien@micropython.org>
This tests `from mod import foo` where `mod` is a module registered using
the webassembly API `registerJsModule(mod)`, and where `foo` is a
JavaScript function. Prior to the parent commit, this would fail.
Signed-off-by: Damien George <damien@micropython.org>
This change follows CPython behaviour, allowing use of:
from instance import method
to import a bound method from a class instance, eg registered via
setting `sys.modules["instance"] = instance`.
Admittedly this is probably a very rarely used pattern in Python, but it
resolves a long-standing comment about whether or not this is actually
possible (it turns out it is possible!). A test is added to show how it
works.
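A minimal, CPython-compatible sketch of the pattern (the class and method names are illustrative):

    import sys

    class Greeter:
        def hello(self):
            return "hello from a bound method"

    instance = Greeter()
    sys.modules["instance"] = instance   # expose the instance under a module name

    from instance import hello           # imports the bound method from the instance
    print(hello())                       # -> hello from a bound method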
The main reason for this change is to fix a problem with imports in the
webassembly port: prior to this fix, it was not possible to do `from
js_module import function`, where `js_module` is a JavaScript object
registered to be visible to Python through the webassembly API function
`registerJsModule(js_module)`. But now with this fix that is possible.
Signed-off-by: Damien George <damien@micropython.org>
That is, an object whose type defines the protocol slot.
Note that due to protocol confusion, a variant of the original crasher that
returned e.g., a machine.Pin instance could still lead to a crash (#17852).
Fixes issue #17841.
Signed-off-by: Jeff Epler <jepler@gmail.com>
Signed-off-by: Jeff Epler <jepler@unpythonic.net>
It's frequently the case that a developer will want to compare the object
code size of various alternatives. When this can be done at the level of a
single object file, the turnaround is faster.
Provide a rule `$(BUILD)/%.sz` to print the size of a given object.
Because it is a normal Makefile target that depends on an object file, it
rebuilds the object file if needed.
Signed-off-by: Jeff Epler <jepler@unpythonic.net>
With the parent commit implementing proper identities, this equality check
option is no longer needed.
Signed-off-by: Damien George <damien@micropython.org>
Commit ffa98cb014 improved equality for
`JsProxy` objects so that, eg, `js.Object == js.Object` is true.
As mentioned in #17758, a further optimisation is to make identity work in
that case, eg `js.Object is js.Object` should be true (on the Python side).
This commit implements that, by keeping track of all `JsProxy` Python
objects and reusing them where possible, ie when the underlying JS refs
point to the same JS object. That reduces memory churn and gives better
identity behaviour for JS objects proxied over to Python.
As part of this, a bug is fixed where JS objects could be freed while there
was still a `JsProxy` referring to them. A test is added for that
exact scenario, and the test now passes.
Signed-off-by: Damien George <damien@micropython.org>
Doing GC calls in the entry path (when JavaScript calls into MicroPython at
the top/outer level) can lead to freeing of objects which are still in use.
This is because the (JavaScript) objects are referenced in the input
arguments to the C functions but are not yet converted to full proxy
objects, and so are not yet tracked properly by the live-object tracker.
Signed-off-by: Damien George <damien@micropython.org>
This commit makes it explicit that the port uses the
MICROPY_CONFIG_ROM_LEVEL_CORE_FEATURES feature level, and removes config
options that are the default at this level.
This change is a no-op for the firmware.
Signed-off-by: Damien George <damien@micropython.org>
On most ports, printing an instance of machine.SPI gives something like:
>>> machine.SPI(1)
SPI(1, baudrate=328125, polarity=0, phase=0, bits=8)
This commit makes the nrf port do the same.
The reason for this change is:
- make nrf consistent with other ports
- allow `tests/extmod/machine_spi_rate.py` to run on the nrf port (this
  test parses the output of str(spi) to get the actual baudrate), as
  sketched below
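As an illustration of that parsing (not the test's actual code, just the idea of recovering the baudrate from the repr):

    s = "SPI(1, baudrate=328125, polarity=0, phase=0, bits=8)"
    baudrate = int(s.split("baudrate=")[1].split(",")[0])
    print(baudrate)   # 328125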
Signed-off-by: Damien George <damien@micropython.org>
The magnitude of `range()` arguments is not restricted to "small" ints, but
includes "machine ints" which fit inside a register but can only be
represented as "long integer" objects in Python.
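For illustration, in plain Python (assuming a 32-bit build where small ints top out below 2**30), arguments like these fit in a machine word but not in a small int:

    start = 0x7FFFFFF0            # fits in a 32-bit register, not in a small int
    for i in range(start, start + 4):
        print(hex(i))             # 0x7ffffff0 ... 0x7ffffff3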
Signed-off-by: Jeff Epler <jepler@gmail.com>
Also expand the test for `readinto()` to cover the difference between
reading the requested amount via multiple underlying IO calls, and doing
only a single call.
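The distinction can be sketched with a hypothetical raw stream that only delivers a couple of bytes per underlying call (CPython's io module is used here purely for illustration):

    import io

    class TrickleIO(io.RawIOBase):
        # Returns at most 2 bytes per underlying read call.
        def __init__(self, data):
            self._buf = data
        def readable(self):
            return True
        def readinto(self, b):
            n = min(2, len(b), len(self._buf))
            b[:n] = self._buf[:n]
            self._buf = self._buf[n:]
            return n

    buf = bytearray(8)
    print(TrickleIO(b"abcdefgh").readinto(buf), buf)          # one underlying call: 2 bytes
    print(io.BufferedReader(TrickleIO(b"abcdefgh")).read(8))  # loops over calls: all 8 bytes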
Signed-off-by: Damien George <damien@micropython.org>
This commit refactors some common code in the core stream implementation,
to reduce code size while retaining the same functionality.
With the factoring, `readinto`/`readinto1` could now support an additional
4th argument (like write) but it's best not to introduce even more CPython
incompatibility, so they are left as having a maximum of 3 args.
Signed-off-by: Damien George <damien@micropython.org>
On the zephyr port, hard IRQ handlers run with a separate stack on a
different thread, so each call to mp_irq_dispatch() and mp_irq_handler()
has to be wrapped with adjustments to the stack-limit checker.
Move these adjustments into the shared mp_irq_dispatch(), introducing
MICROPY_STACK_SIZE_HARD_IRQ which a port can define to non-zero if it
uses a separate stack for hard IRQ handlers. We only need to wrap the hard
dispatch case. This should reduce binary size on zephyr without affecting
other ports.
Signed-off-by: Chris Webb <chris@arachsys.com>
Update the main machine.Timer specification, and any references to
hard/soft interrupts in port-specific documentation. There is a separate
copy of the machine.Timer documentation for the pyboard, so update that
too to keep everything consistent.
Signed-off-by: Chris Webb <chris@arachsys.com>
On platforms where hardware timers are available, test these in each
combination of hard/soft and one-shot/periodic in the same way as for
software timers. Where a platform supports both software (id = -1) and
hardware (id >= 0) timers, the behaviour of both is now checked.
For now, esp8266 is the only platform that supports hardware timers and
both hard and soft callbacks.
Signed-off-by: Chris Webb <chris@arachsys.com>
Now that all ports with machine.Timer except nrf support both hard and
soft callbacks, generalise tests/ports/rp2_machine_timer.py into
tests/extmod/machine_timer.py.
There is an existing machine_soft_timer.py which varies period= and
covers the nrf port but skips esp32/esp8266 because they don't support
software timers. In our new test, we try varying freq= instead of period=,
and cover esp32/esp8266 (with a fixed choice of hardware timer) but skip
nrf because it doesn't support hard= or freq=.
Add a check that the heap is locked (so allocation fails) in hard
callbacks and it is unlocked (so allocation succeeds) in soft callbacks,
to ensure we're getting the right kind of callback, not falling back to
the default.
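A hedged sketch of the heap check (MicroPython-specific API; the timer id, period and sleep values here are illustrative):

    import machine, time

    result = None

    def cb(timer):
        global result
        try:
            bytearray(16)           # any heap allocation
            result = "allocated"    # expected for soft callbacks
        except MemoryError:
            result = "heap locked"  # expected for hard callbacks

    machine.Timer(-1, mode=machine.Timer.ONE_SHOT, period=10, callback=cb, hard=True)
    time.sleep_ms(50)
    print(result)   # "heap locked" shows the callback really ran in hard context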
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
Most ports now support a hard= argument to the machine.Timer constructor
or initialiser to explicitly choose between these behaviours. However,
esp32 cannot support hard callbacks because its timer interrupts are not
delivered to the main thread, so the interrupt handler would need to
acquire the GIL.
Raise a ValueError if hard=True is requested for esp32 machine.Timer.
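For example (a usage sketch, esp32-specific):

    from machine import Timer

    try:
        Timer(0, mode=Timer.ONE_SHOT, period=100, callback=lambda t: None, hard=True)
    except ValueError:
        print("hard callbacks are not supported on esp32")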
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
As on the rp2 port, add support to the esp8266 port for a hard= argument
to explicitly choose between these, setting the default to False to match
the existing behaviour. Open-code this because esp8266 doesn't link
against mpirq.c and so can't use mp_irq_dispatch().
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
As on the rp2 port, add support to the zephyr port for a hard= argument
to explicitly choose between these, setting the default to False to match
the existing behaviour.
Adjust the stack-limit check to use the ISR stack while the callback is
dispatched so that hard IRQ callbacks work, as with machine_pin.c and
machine_i2c_target.c IRQ callbacks.
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
As on the rp2 port, add support to the stm32 port for a hard= argument
to explicitly choose between these, setting the default to True to match
the existing behaviour.
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
As on the rp2 port, add support to the renesas-ra port for a hard= argument
to explicitly choose between these, setting the default to True to match
the existing behaviour.
Signed-off-by: Chris Webb <chris@arachsys.com>
Now that mp_irq_dispatch() is available to dispatch arbitrary hard/soft
callbacks, take advantage of this for rp2 machine.Timer. This should
slightly reduce binary size.
Signed-off-by: Chris Webb <chris@arachsys.com>
machine.Timer() has inconsistent behaviour between ports: some run
callbacks in hard IRQ context whereas others schedule them like soft IRQs.
As on the rp2 port, add support to the generic software timer for a hard=
argument to explicitly choose between these, setting the default to False
to match the existing behaviour. This enables hard timer callbacks for
the alif, mimxrt and samd ports.
Signed-off-by: Chris Webb <chris@arachsys.com>
Add a flag SOFT_TIMER_HARD_CALLBACK to request that a soft timer's Python
callback be run directly from the IRQ handler with the scheduler and heap
locked, instead of being scheduled via mp_sched_schedule().
Signed-off-by: Chris Webb <chris@arachsys.com>
Separate out a routine to call an arbitrary function with an arbitrary
argument either directly as a hard-IRQ handler or scheduled as a soft-IRQ
handler, adjusting mp_irq_handler() to wrap this. This can then be used
to implement other hard/soft callbacks, such as for machine.Timer.
Signed-off-by: Chris Webb <chris@arachsys.com>
Changes made here for N6 are:
- set RIF security attributes for ADC12
- clock ADC12 at 50MHz (maximum) so it runs at spec (max 5Msamp/sec)
- increase sampling time for standard channels to 46.5 cycles
- calibrate ADC in `adc.c`
- correctly clear ADC_CFGR1_RES bits in `machine_adc.c`
- set preselection register in `machine_adc.c`
Signed-off-by: Damien George <damien@micropython.org>
Otherwise an error message will pop up at the first instantiation
of the UART object.
Addresses #18122 / #18123.
Signed-off-by: robert-hh <robert@hammelrath.com>
Currently it seems that if the master branch doesn't build for 1-2 days
then the ESP-IDF install (1.6GB) and Zephyr workspace (3.1GB) caches expire.
Then each PR branch has to create its own redundant cache instead of
falling back to the default branch cache, which is expensive and quickly
blows our 10GB cache limit.
Currently this is mitigated (and possibly happens more frequently) due to
GitHub's relatively soft enforcement of the limit (at time of writing we're
using 33GB of 10GB), but apparently they're going to start enforcing it
more aggressively in October.
(We may find we need to do this twice a day...)
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The Windows 8.1 sdksetup.exe in particular seems to fail intermittently
fairly often, so retry each step up to four times before failing outright.
Delete the Chocolatey temp directory between each run, as it seems like the
root cause is a corrupt download.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>