Default to just calling python since that is most commonly available: the
official installer or zipfiles from python.org, anaconda, nupkg all result
in python being available but not python3. In other words: the default
used so far is wrong. Note that os.name is 'posix' when running the Python
version which comes with Cygwin or MSys2, so they are not affected by this.
However, of all possible ways to get Python on Windows, only Cygwin provides
no python command, so update the default way for running tests in the
README.
With mboot encryption and fsload enabled, the DEBUG build -O0 compiler
settings result in mboot no longer fitting in the 32k sector. This commit
changes this to -Og, which also brings it into line with the regular stm32
build.
MCUs with device-only USB peripherals (eg L0, WB) do not implement (at
least not in the ST HAL) the HAL_PCD_DisconnectCallback event. So if a USB
cable is disconnected the USB driver does not deinitialise itself
(usbd_cdc_deinit is not called) and the CDC driver can stay in the
USBD_CDC_CONNECT_STATE_CONNECTED state. Then if the USB was attached to
the REPL, output can become very slow waiting in usbd_cdc_tx_always for
500ms for each character.
The disconnect event is not implemented on these MCUs but the suspend event
is. And in the situation where the USB cable is disconnected the suspend
event is raised because SOF packets are no longer received.
The issue of very slow output on these MCUs is fixed in this commit (really
worked around) by adding a check in usbd_cdc_tx_always to see if the USB
device state is suspended, and, if so, breaking out of the 500ms wait loop.
This should also help all MCUs for a real USB suspend.
A proper fix for MCUs with device-only USB would be to implement or somehow
synthesise the HAL_PCD_DisconnectCallback event.
See issue #6672.
Signed-off-by: Damien George <damien@micropython.org>
PIO state machines can make a conditional jump on the state of a pin: the
`JMP PIN` command. This requires the pin to be configured with
`sm_config_set_jmp_pin`, but until now we didn't have a way of doing that
in MicroPython.
This commit adds a new `jmp_pin=None` argument to `StateMachine`. If it is
not `None` then we try to interpret it as a Pin, and pass its value to
`sm_config_set_jmp_pin`.
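As a hedged illustration (the PIO program and pin number here are
hypothetical, not part of this commit):
import rp2
from machine import Pin

@rp2.asm_pio()
def prog():
    # Loop while the pin configured via jmp_pin reads high.
    label("loop")
    jmp(pin, "loop")

sm = rp2.StateMachine(0, prog, jmp_pin=Pin(16))
sm.active(1)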
Signed-off-by: Tim Radvan <tim@tjvr.org>
Some devices have lower precision than 1ms for time_ns() (eg on PYBv1.x the
RTC has 3.9ms resolution), so make the test more lenient for them.
Signed-off-by: Damien George <damien@micropython.org>
With a check for reproducible build date. Invocation of the test suite is
not needed because it's already run in another job.
Signed-off-by: iTitou <moiandme@gmail.com>
This environment variable, if defined during the build process,
indicates a fixed time that should be used in place of "now" when
such a time is explicitly referenced.
This allows for reproducible builds of MicroPython.
See https://reproducible-builds.org/specs/source-date-epoch/
Signed-off-by: iTitou <moiandme@gmail.com>
This should be enabled when the mp_raw_code_save_file function is needed.
It is enabled for mpy-cross, and a check for defined(__APPLE__) is added to
cover Mac M1 systems.
The upip module is frozen into ports supporting it, and it is included in
the source tree, so there is no need to get it from PyPI. Moreover, the
PyPI package referred to is an out-of-date version of upip which is
basically unrelated to our upip.py because the source is taken from a fork
of micropython-lib instead of this repository.
Add "make submodules" to commands when building for the first time.
Otherwise, on a first time build, the submodules have not been checked out
and a lot of `fatal error: nrfx.h: No such file or directory` errors are
printed.
The main rules enforced are:
- At most 72 characters in the subject line, with a ": " in it.
- At most 75 characters per line in the body.
- No "noreply" email addresses.
This was added a long time ago in 75abee206d
when USB host support was added to the stm (now stm32) port, and when this
pyexec code was actually part of the stm port. It's unlikely to work as
intended anymore. If it is needed in the future then generic hook macros
can be added in pyexec.
It practically does the same as qstr_from_str and was only used in one
place, which should actually use the compile-time MP_QSTR_XXX form for
consistency; qstr_from_str is for runtime strings only.
If the _IRQ_L2CAP_RECV handler does the actual consumption of the incoming
data (i.e. via l2cap_recvinto), rather than setting a flag for
non-scheduler-context to handle it later, then two things can happen:
- It can starve the VM (i.e. the scheduled task never terminates). This is
because calling l2cap_recvinto will empty the rx buffer, which will grant
more credits to the channel (an HCI command), meaning more data can
arrive. This means that the loop in hal_uart.c that keeps reading HCI
data from the uart and executing NimBLE events as they are created will
not terminate, preventing other VM code from running.
- There's no flow control (i.e. data will arrive too quickly). The channel
shouldn't be given credits until after we return from scheduler context.
It's preferable that no work is done in scheduler/IRQ context. But to
prevent this being a problem, this commit changes l2cap_recvinto so that,
if it is called in IRQ context and the Python handler empties the rx
buffer, credits are not granted until the Python handler is complete.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Don't clear the IPCC channel flag until we've actually handled the incoming
data, or else the wireless firmware may clobber the IPCC buffer if more
data arrives. This requires masking the IRQ until the data is handled.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit adds a new port "rp2" which targets the new Raspberry Pi RP2040
microcontroller.
The build system uses pure cmake (with a small Makefile wrapper for
convenience). The USB driver is TinyUSB, and there is a machine module
with most of the standard classes implemented. Some examples are provided
in the examples/rp2/ directory.
Work done in collaboration with Graham Sanderson.
Signed-off-by: Damien George <damien@micropython.org>
These args are already bounds checked and clipped, and using unsigned ints
can be more efficient. It also eliminates possible issues and compiler
warnings with shifting of signed integers.
Signed-off-by: Damien George <damien@micropython.org>
Adds a new compile-time option MICROPY_EMIT_THUMB_ARMV7M which is enabled
by default (to get existing behaviour) and which should be disabled (set to
0) when building native emitter support (@micropython.native) on ARMv6M
targets.
This returns a reference to the globals dict associated with the function,
ie the global scope that the function was defined in. This attribute is
read-only but the dict itself is modifiable, per CPython behaviour.
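For example (a minimal sketch):
def f():
    pass

print(f.__globals__ is globals())  # True: the defining module's globals dict
f.__globals__["answer"] = 42       # the dict itself can be modified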
Signed-off-by: Damien George <damien@micropython.org>
It's enabled by default to retain the existing behaviour. A board can
disable this option if it manages mounting the filesystem itself, for
example in frozen code.
Signed-off-by: Damien George <damien@micropython.org>
Changes are:
- refactor to use new _create_element function
- support extended version of MOUNT element with block size
- support STATUS element
Signed-off-by: Damien George <damien@micropython.org>
This new element takes the form: (ELEM_TYPE_STATUS, 4, <address>). If this
element is present in the mboot command then mboot will store to the given
address the result of the filesystem firmware update process. The address
can for example be an RTC backup register.
Signed-off-by: Damien George <damien@micropython.org>
Instead it is now passed in as an optional parameter to the ELEM_MOUNT
element, with a compile-time configurable default.
Signed-off-by: Damien George <damien@micropython.org>
The superblock for littlefs is in block 0 and 1, but block 0 may be erased
or partially written, so block 1 must be checked if block 0 does not have a
valid littlefs superblock in it.
Prior to this commit, if block 0 did not contain a valid littlefs
superblock (but block 1 did) then the auto-detection would fail, mounting a
FAT filesystem would also fail, and the system would reformat the flash,
even though it may have contained a valid littlefs filesystem. This is now
fixed.
Signed-off-by: Damien George <damien@micropython.org>
The superblock for littlefs is in block 0 and 1, but block 0 may be erased
or partially written, so block 1 must be checked if block 0 does not have a
valid littlefs superblock in it.
Prior to this commit, the mount of a block device which auto-detected the
filesystem type would fail for littlefs if block 0 did not contain a valid
superblock. That is now fixed.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds many new sections to the existing "Developing and building
MicroPython" chapter to make it all about the internals of MicroPython.
This work was done as part of Google's Season of Docs 2020.
Changes are:
- Use ubuntu-20.04 so that gcc-multilib installs without error.
- Use "fetch-depth: 100" to get history prior to pull request.
Signed-off-by: Damien George <damien@micropython.org>
* Fix a typo in the Makefile that prevented the debug build from actually
being enabled when BTYPE=debug is used.
* Add a missing header in modmachine.c that is used when a debug build is
created.
This commit improves some FTP implementation details for better
compatibility with FTP clients:
* The PWD command now puts quotes around the directory name before
returning it. This fixes BBEdit’s FTP client, which performs a PWD after
each CWD and gets confused if the returned directory path is not
surrounded by quotes.
* The FEAT command is now allowed before logging in. This fixes the lftp
client, which sends FEAT first and gets confused (tries to use TLS) if the
server responds with 332.
With MICROPY_FLOAT_IMPL_FLOAT the results of utime.time(), gmtime() and
localtime() change only every 129 seconds. As one consequence
tests/extmod/vfs_lfs_mtime.py will fail on a unix port with LFS support.
With this patch these functions only return floats if
MICROPY_FLOAT_IMPL_DOUBLE is used. Otherwise they return integers.
According to documentation time() has a precision of at least 1 second.
This test runs for 2.5 seconds and calls all utime functions every 100ms.
Then it checks if they returned enough different results. All functions
with sub-second precision will return ~25 results. This test passes with
15 results or more. Functions that do not exist are skipped silently.
This allows sending arbitrary HCI commands and getting the response. The
return value of the function is the status of the command.
This is intended for debugging and not to be a part of the public API, and
must be enabled via mpconfigboard.h. It's currently only implemented for
NimBLE bindings.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
To match the definition of GENERATE_PACK_DFU, so a board can customise the
location/name of this file if needed.
Signed-off-by: Damien George <damien@micropython.org>
To have at least one board configured with MBOOT_ENABLE_PACKING, for CI
testing purposes and demonstration of the feature.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds support to stm32's mboot for signed, encrypted and
compressed DFU updates. It is based on initial work done by Andrew Leech.
The feature is enabled by setting MBOOT_ENABLE_PACKING to 1 in the board's
mpconfigboard.mk file, and by providing a header file in the board folder
(usually called mboot_keys.h) with a set of signing and encryption keys
(which can be generated by mboot_pack_dfu.py). The signing and encryption
is provided by libhydrogen. Compression is provided by uzlib. Enabling
packing costs about 3k of flash.
The included mboot_pack_dfu.py script converts a .dfu file to a .pack.dfu
file which can be subsequently deployed to a board with mboot in packing
mode. This .pack.dfu file is created as follows:
- the firmware from the original .dfu is split into chunks (so the
decryption can fit in RAM)
- each chunk is compressed, encrypted, a header added, then signed
- a special final chunk is added with a signature of the entire firmware
- all chunks are concatenated to make the final .pack.dfu file
The .pack.dfu file can be deployed over USB or from the internal filesystem
on the device (if MBOOT_FSLOAD is enabled).
See #5267 and #5309 for additional discussion.
Signed-off-by: Damien George <damien@micropython.org>
Prior to this fix, the final piece of data in a compressed file may have
been lost when decompressing.
Signed-off-by: Damien George <damien@micropython.org>
This library is a small and easy-to-use cryptographic library which is well
suited to embedded systems.
Signed-off-by: Damien George <damien@micropython.org>
The original logic of reducing a full path to a relative one assumes
"tests/misc" is in the filename which is limited in usage: it never works
for CPython on Windows since that will use a backslash as path separator,
and also won't work when the filename is a path not relative to the tests
directory which happens for example in the common case of running
"./run-tests -d misc".
Fix all cases by printing only the bare filename, which requires them all
to start with sys_settrace_, hence the renaming.
Changes are:
- Remove include of stm32's adc.h because it was recently changed and is
no longer compatible with teensy (and not used anyway).
- Remove define of __disable_irq in mpconfigport.h because it was clashing
with an equivalent definition in core/mk20dx128.h.
- Add -Werror to CFLAGS, and change -std=gnu99 to -std=c99.
Signed-off-by: Damien George <damien@micropython.org>
Mboot builds do not use the external SPI flash in caching mode, and
explicitly disabling it saves RAM and a small bit of flash.
Signed-off-by: Damien George <damien@micropython.org>
This only needs to be enabled if a board uses FAT FS on external SPI flash.
When disabled (and using external SPI flash) 4k of RAM can be saved.
Signed-off-by: Damien George <damien@micropython.org>
When littlefs is enabled extended reading must be supported, and using this
function to read the first block for auto-detection is more efficient (a
smaller read) and does not require a cached SPI-flash read.
Signed-off-by: Damien George <damien@micropython.org>
These functions enable SDRAM data retention in stop mode. Example usage,
in mpconfigboard.h:
#define MICROPY_BOARD_ENTER_STOP sdram_enter_low_power();
#define MICROPY_BOARD_LEAVE_STOP sdram_leave_low_power();
Calculate the bit timing from baudrate if provided, allowing sample point
override. This makes it a lot easier to make CAN work between different
MCUs with different clocks, prescalers etc.
Tested on F4, F7 and H7 Y/V variants.
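A hedged sketch of the intended usage (the bus number and values are
illustrative, and the keyword names are assumed to be baudrate and
sample_point):
from pyb import CAN
can = CAN(1, CAN.NORMAL, baudrate=500000, sample_point=75)  # bit timing computed from baudrate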
This commit prevents uos.mount() from raising an AttributeError.
vfs_autodetect() is supposed to return an object that has a "mount" method,
so if no filesystem is found it should raise an OSError(ENODEV) and not
return the bdev itself which has no "mount" method.
Mounting a bdev directly tries to auto-detect the filesystem and if none is
found an OSError(19,) should be raised.
The fourth parameter of readblocks() and writeblocks() must be optional to
support ports with MICROPY_VFS_FAT=1. Otherwise mounting a bdev may fail
because looking for a FATFS will call readblocks() with only 3 parameters.
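A minimal sketch of a block device following this convention (names and
sizes are illustrative):
class RAMBlockDev:
    def __init__(self, block_size=512, num_blocks=16):
        self.block_size = block_size
        self.data = bytearray(block_size * num_blocks)

    def readblocks(self, block_num, buf, offset=0):
        # offset (the 4th parameter) must have a default for FAT auto-detection
        addr = block_num * self.block_size + offset
        buf[:] = self.data[addr:addr + len(buf)]

    def writeblocks(self, block_num, buf, offset=0):
        addr = block_num * self.block_size + offset
        self.data[addr:addr + len(buf)] = buf

    def ioctl(self, op, arg):
        if op == 4:  # block count
            return len(self.data) // self.block_size
        if op == 5:  # block size
            return self.block_size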
Since CPython 3.8 the optional "sep" argument to hexlify is officially
supported, so update comments in the code and the docs to reflect this.
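For example (expected output shown in comments):
import ubinascii
ubinascii.hexlify(b'\x12\x34\x56')        # b'123456'
ubinascii.hexlify(b'\x12\x34\x56', ':')   # b'12:34:56'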
Signed-off-by: Damien George <damien@micropython.org>
As a general pattern, required positional arguments that are not named do
not need to be parsed using mp_arg_parse_all().
Signed-off-by: Damien George <damien@micropython.org>
Because the version included in xtensa-lx106-elf-standalone.tar.gz needs
Python 2 (and pyserial for Python 2).
Signed-off-by: Damien George <damien@micropython.org>
This much buffer space is required for CDC data out endpoints to avoid any
buffer overflows when the USB CDC is saturated with data.
Signed-off-by: Damien George <damien@micropython.org>
The nrf52840-mdk-usb-dongle and pca10050 come with a pre-flashed
bootloader (OpenBootloader).
This commit updates the boards' "mpconfigboard.mk" files to use DFU as the
default flashing method and sets the corresponding BOOTLOADER
settings such that the nrf52840_open_bootloader_1.2.x.ld linker
script is used.
The default DFU flashing method can be disabled by issuing "DFU=0"
when invoking make. This will lead to "segger" being used as default
flashing tool. When using "DFU=0", the linker scripts will not
compensate for any MBR and Bootloader region being present, and might
overwrite them if they were present.
The commit also removes the custom linker script specific to
nrf52840-mdk-usb-dongle as it now uses a generic one.
Updated nrf52840-mdk-usb-dongle's README.md to be clearer on
how to deploy the built firmware.
The port README.md has also been updated. In the list of target
boards a new column has been added to indicate which bootloader
is present on the target board. And for consistency, changed all
examples in the README.md to use "deploy" instead of "flash".
An additional Makefile parameter NRFUTIL_PORT can be set in order
to define the serial port to be used for the DFU (default: /dev/ttyACM0).
The "nrfutil" that is used as flasher towards OpenBootloader is
available for installation through Python "pip".
In case of SD=s140, SoftDevice ID 0xB6 is passed to nrfutil's package
generation which corresponds to SoftDevice s140 v6.1.1.
Add the option for "mpconfigboard.mk" to define whether the
board hosts a bootloader or not. The BOOTLOADER make variable
must be set to the name of the bootloader.
When the BOOTLOADER name is set it is also required to supply
the BOOTLOADER_VERSION_MAJOR and the BOOTLOADER_VERSION_MINOR
from the "mpconfigboards.mk". These will be used to resolve which
bootloader linker script should be passed to the linker.
The BOOTLOADER section also supplies the C-compiler with
BOOTLOADER_<bootloader name>=<version major><version minor>
as a compiler define. This is for future use in case a bootloader
needs to make modifications to the startup files or similar (like
setting the VTOR specific to a version of a bootloader).
Adding variables that can be set from other linker scripts:
- _bootloader_head_size:
Bootloader flash offset in front of the application.
- _bootloader_tail_size:
Bootloader offset from the tail of the flash.
In case the bootloader is located at the end.
- _bootloader_head_ram_size:
Bootloader RAM usage in front of the application.
Updated calculations of application flash and RAM.
Two issues are tackled:
1. The calculation of the correct length to print is fixed to treat the
precision as a maximum length instead of as the exact length.
This is done for both qstr (%q) and for regular str (%s).
2. Fix the incorrect use of mp_printf("%.*s") by replacing it with
mp_print_strn().
Because of the fix of the above issue, some testcases that would print
an embedded null-byte (^@ in test output) would now fail.
The bug here is that "%s" was used to print null-bytes. Instead,
mp_print_strn is used to make sure all bytes are output and the
exact length is respected.
Test-cases are added for both %s and %q with a combination of precision
and padding specifiers.
The zephyr function net_shell_cmd_iface() was removed in zephyr v1.14.0,
so the MicroPython zephyr port did not build with newer zephyr
versions when CONFIG_NET_SHELL=y. Replace it with a more general
shell_exec() function that can execute any zephyr shell command. For
example:
>>> zephyr.shell_exec("net")
Subcommands:
allocs :Print network memory allocations.
arp :Print information about IPv4 ARP cache.
conn :Print information about network connections.
dns :Show how DNS is configured.
events :Monitor network management events.
gptp :Print information about gPTP support.
iface :Print information about network interfaces.
ipv6 :Print information about IPv6 specific information and
configuration.
mem :Print information about network memory usage.
nbr :Print neighbor information.
ping :Ping a network host.
pkt :net_pkt information.
ppp :PPP information.
resume :Resume a network interface
route :Show network route.
stacks :Show network stacks information.
stats :Show network statistics.
suspend :Suspend a network interface
tcp :Connect/send/close TCP connection.
vlan :Show VLAN information.
websocket :Print information about WebSocket connections.
>>> zephyr.shell_exec("kernel")
kernel - Kernel commands
Subcommands:
cycles :Kernel cycles.
reboot :Reboot.
stacks :List threads stack usage.
threads :List kernel threads.
uptime :Kernel uptime.
version :Kernel version.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
The -Og optimisation level produces a more realistic build, gives a better
debugging experience, and generates smaller code than -O0, allowing debug
builds to fit in flash.
This commit also assigns variables in can.c to prevent warnings when -Og is
used, and builds a board in CI with DEBUG=1 enabled.
Signed-off-by: Damien George <damien@micropython.org>
Allows reserving CAN, I2C, SPI, Timer and UART peripherals. If reserved
the peripheral cannot be accessed from Python.
Signed-off-by: Damien George <damien@micropython.org>
Even though IRQs are disabled, this seems to be required on H7 Rev Y,
otherwise the Systick interrupt triggers and the MCU leaves stop mode
immediately.
This commit saves OSCs/PLLs state before STOP mode and restores them on
exit. Some boards use HSI48 for USB for example, others have PLL2/3
enabled, etc.
Rather than dealing with the different int types, just pass them all as a
single array of mp_int_t with n_unsigned (before addr) and n_signed (after
addr).
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This adds `_IRQ_GET_SECRET` and `_IRQ_SET_SECRET` events to allow the BT
stack to request that the Python code retrieve/store/delete secret key
data. The actual keys and values are opaque to Python and stack-specific.
Only NimBLE is implemented (pending moving btstack to sync events). The
secret store is designed to be compatible with BlueKitchen's TLV store API.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This allows the application to be notified if any of the encrypted,
authenticated and bonded states change, as well as the encryption key size.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Enable it on STM32/Unix NimBLE only (pairing/bonding requires synchronous
events and full bindings).
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Instead of returning None/bool from the IRQ, return None/int (where a zero
value means success). This mirrors how the L2CAP_ACCEPT return value
works.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This widens the characteristic/descriptor flags to 16-bit, to allow setting
encryption/authentication requirements.
Sets the required flags for NimBLE and btstack implementations.
The BLE.FLAG_* constants will eventually be deprecated in favour of copy
and paste Python constants (like the IRQs).
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This allows the application to be notified of changes to the connection
interval, connection latency and supervision timeout.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit switches the role of the helper task from a cancellation task
to a runner task, to get the correct semantics for cancellation of
wait_for.
Some uasyncio tests are now disabled for the native emitter due to issues
with native code generation of generators and yield-from.
Fixes #5797.
Signed-off-by: Damien George <damien@micropython.org>
This is added because task.coro==None is no longer the way to detect if a
task is finished. Providing a (CPython compatible) function for this
allows the implementation to be abstracted away.
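A hedged sketch of the intended usage:
import uasyncio

async def work():
    await uasyncio.sleep_ms(10)

async def main():
    t = uasyncio.create_task(work())
    print(t.done())          # False: the task has not run yet
    await uasyncio.sleep_ms(50)
    print(t.done())          # True: finished (instead of checking t.coro is None)

uasyncio.run(main())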
Signed-off-by: Damien George <damien@micropython.org>
When a task raises an exception which is uncaught, and no other task
awaits on that task, then an error message is printed (or a user function
called) via a call to Loop.call_exception_handler. In CPython this call is
made when the Task object is freed (eg via reference counting) because it's
at that point that it is known that the exception that was raised will
never be handled.
MicroPython does not have reference counting and the current behaviour is
to deal with uncaught exceptions as early as possible, ie as soon as they
terminate the task. But this can be undesirable because in certain cases
a task can start and raise an exception immediately (before any await is
executed in that task's coro) and before any other task gets a chance to
await on it to catch the exception.
This commit changes the behaviour so that tasks which end due to an
uncaught exception are scheduled one more time for execution, and if they
are not await'ed on by the next scheduling loop, then the exception handler
is called (eg the exception is printed out).
Signed-off-by: Damien George <damien@micropython.org>
This commit adds support to pyboard.py for the new raw REPL paste mode.
Note that this new pyboard.py is fully backwards compatible with old
devices (it detects if the device supports the new raw REPL paste mode).
Signed-off-by: Damien George <damien@micropython.org>
Background: the friendly/normal REPL is intended for human use whereas the
raw REPL is for computer use/automation. Raw REPL is used for things like
pyboard.py script_to_run.py. The normal REPL has built-in flow control
because it echoes back the characters. That's not so with raw REPL and flow
control is just implemented by rate limiting the amount of data that goes
in. Currently it's fixed at 256 byte chunks every 10ms. This is sometimes
too fast for slow MCUs or systems with small stdin buffers. It's also too
slow for a lot of higher-end MCUs, ie it could be a lot faster.
This commit adds a new raw REPL mode which includes flow control: the
device will echo back a character after a certain number of bytes are sent
to the host, and the host can use this to regulate the data going out to
the device. The amount of characters is controlled by the device and sent
to the host before communication starts. This flow control allows getting
the maximum speed out of a serial link, regardless of the link or the
device at the other end.
Also, this new raw REPL mode parses and compiles the incoming data as it
comes in. It does this by creating a "stdin reader" object which is then
passed to the lexer. The lexer requests bytes from this "stdin reader"
which retrieves bytes from the host, and does flow control. What this
means is that no memory is used to store the script (in the existing raw
REPL mode the device needs a big buffer to read in the script before it can
pass it on to the lexer/parser/compiler). The only memory needed on the
device is enough to parse and compile.
Finally, it would be possible to extend this new raw REPL to allow bytecode
(.mpy files) to be sent as well as text mode scripts (but that's not done
in this commit).
Some results follow. The test was to send a large 33k script that contains
mostly comments and then prints out the heap, run via pyboard.py large.py.
On PYBD-SF6, prior to this PR:
$ ./pyboard.py large.py
stack: 524 out of 23552
GC: total: 392192, used: 34464, free: 357728
No. of 1-blocks: 12, 2-blocks: 2, max blk sz: 2075, max free sz: 22345
GC memory layout; from 2001a3f0:
00000: h=hhhh=======================================hhBShShh==h=======h
00400: =====hh=B........h==h===========================================
00800: ================================================================
00c00: ================================================================
01000: ================================================================
01400: ================================================================
01800: ================================================================
01c00: ================================================================
02000: ================================================================
02400: ================================================================
02800: ================================================================
02c00: ================================================================
03000: ================================================================
03400: ================================================================
03800: ================================================================
03c00: ================================================================
04000: ================================================================
04400: ================================================================
04800: ================================================================
04c00: ================================================================
05000: ================================================================
05400: ================================================================
05800: ================================================================
05c00: ================================================================
06000: ================================================================
06400: ================================================================
06800: ================================================================
06c00: ================================================================
07000: ================================================================
07400: ================================================================
07800: ================================================================
07c00: ================================================================
08000: ================================================================
08400: ===============================================.....h==.........
(349 lines all free)
(the big blob of used memory is the large script).
Same but with this PR:
$ ./pyboard.py large.py
stack: 524 out of 23552
GC: total: 392192, used: 1296, free: 390896
No. of 1-blocks: 12, 2-blocks: 3, max blk sz: 40, max free sz: 24420
GC memory layout; from 2001a3f0:
00000: h=hhhh=======================================hhBShShh==h=======h
00400: =====hh=h=B......h==.....h==....................................
(381 lines all free)
The only thing in RAM is the compiled script (and some other unrelated
items).
Time to download before this PR: 1438ms, data rate: 230,799 bits/sec.
Time to download with this PR: 119ms, data rate: 2,788,991 bits/sec.
So it's more than 10 times faster, and uses significantly less RAM.
Results are similar on other boards. On an stm32 board that connects via
UART only at 115200 baud, the data rate goes from 80kbit/sec to
113kbit/sec, so gets close to saturating the UART link without loss of
data.
The new raw REPL mode also supports a single ctrl-C to break out of this
flow-control mode, so that a ctrl-C can always get back to a known state.
It's also backwards compatible with the original raw REPL mode, which is
still supported with the same sequence of commands. The new raw REPL
mode is activated by ctrl-E, which gives an error on devices that do not
support the new mode.
Signed-off-by: Damien George <damien@micropython.org>
Travis now limits the number of free minutes for open-source projects, and
it does not provide enough for this project. So stop using it and instead
use GitHub Actions.
Signed-off-by: Damien George <damien@micropython.org>
Also known as L2CAP "connection oriented channels". This provides a
socket-like data transfer mechanism for BLE.
Currently only implemented for NimBLE on STM32 / Unix.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Hardware I2C implementations must provide a .init() protocol method if they
want to support reconfiguration. Otherwise the default is that i2c.init()
raises an OSError (currently the case for all ports).
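As a hedged sketch of the resulting behaviour (the bus id is illustrative):
from machine import I2C

i2c = I2C(1)
try:
    i2c.init()      # hardware ports without a .init() protocol method raise OSError
except OSError:
    pass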
mp_machine_soft_i2c_locals_dict is renamed to mp_machine_i2c_locals_dict to
match the generic SPI bindings.
Fixes issue #6623 (where calling .init() on a HW I2C would crash).
Signed-off-by: Damien George <damien@micropython.org>
This fixes the build for non-STM32WB based boards when the NimBLE submodule
has not been fetched, and also allows STM32WB boards to build with BLE
disabled.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This is needed to moderate concurrent access to the internal flash, as
while an erase/write is in progress execution will stall on the wireless
core due to the bus being locked.
This implements Figure 10 from AN5289 Rev 3.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit switches the STM32WB HCI interface (between the two CPUs) to
require the use of MICROPY_PY_BLUETOOTH_USE_SYNC_EVENTS, and as a
consequence to require NimBLE. IPCC RX IRQs now schedule the NimBLE
handler to run via mp_sched_schedule.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This test currently passes on Unix/PYBD, but fails on WB55 because it lacks
synchronisation of the internal flash.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This changes stm32 from using PENDSV to run NimBLE to using the MicroPython
scheduler instead. This allows Python BLE callbacks to be invoked directly
(and therefore synchronously) rather than via the ringbuffer.
The NimBLE UART HCI and event processing now happens in a scheduled task
every 128ms. When RX IRQ idle events arrive, it will also schedule this
task to improve latency.
There is a similar change for the unix port where the background thread now
queues the scheduled task.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This requires that the event handlers are called from non-interrupt context
(i.e. the MicroPython scheduler).
This will allow the BLE stack (e.g. NimBLE) to run from the scheduler
rather than an IRQ like PENDSV, and therefore be able to invoke Python
callbacks directly/synchronously. This allows writing Python BLE handlers
for events that require immediate response such as _IRQ_READ_REQUEST (which
was previously a hard IRQ) and future events relating to pairing/bonding.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Using a semaphore (the previous approach) will only run the UART, whereas
for startup we need to also run the event queue.
This change makes it run the full scheduler hook.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Instead of having the stack indicate a "start", "data"..., "end", pass
through the data in one callback as an array of chunks of data.
This is because the upcoming non-ringbuffer modbluetooth implementation
cannot buffer the data in the ringbuffer and requires instead a single
callback with all the data, to pass to the Python callback.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Use the same `wait_for_event` helper in all tests; it doesn't hold a
reference to the event data tuple and handles repeat events.
Also fix a few misc reliability issues around timeouts and sequencing.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Devices with RTC backup-batteries have been shown (very rarely) to have
incorrect RTC prescaler values. Such incorrect values mean the RTC counts
fast or slow, and will be wrong forever if the power/backup-battery is
always present.
This commit detects such a state at start up (hard reset) and corrects it
by reconfiguring the RTC prescaler values.
Signed-off-by: Damien George <damien@micropython.org>
And rename SRC_HAL -> HAL_SRC_C and SRC_USBDEV -> USBDEV_SRC_C for
consistency with other source variables.
Follow on from 0fff2e03fe
Signed-off-by: Damien George <damien@micropython.org>
Prior to this change machine.mem32['foo'] (or using any other non-integer
subscript) could result in a fault due to 'foo' being interpreted as an
integer. And when writing code it's hard to tell if the fault is due to a
bad subscript type, or an integer subscript that specifies an invalid
memory address.
The type of the object used in the subscript is now tested to be an
integer by using mp_obj_get_int_truncated instead of
mp_obj_int_get_truncated. The performance hit of this change is minimal,
and machine.memX objects are more for convenience than performance (there
are many other ways to read/write memory in a faster way).
Fixes issue #6588.
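For illustration (the address is arbitrary and not meant to be run as-is):
import machine

machine.mem32[0x20000000]        # read a 32-bit word
machine.mem32[0x20000000] = 0    # write a 32-bit word
machine.mem32['foo']             # now raises an exception instead of faulting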
The file `$(BUILD)/firmware.bin` was used by the targets `deploy-stlink`
and `deploy-openocd` but it was generated indirectly by the target
`firmware.dfu`.
As this file could be used to program boards directly via a Mass Storage
copy, it's better to make it explicitly generated.
Additionally, some targets are refactored to remove redundancy and be more
explicit about dependencies.
This gives a substantial speedup of the preprocessing step, i.e. the
generation of qstr.i.last. For example on a clean build, making
qstr.i.last:
21s -> 4s on STM32 (WB55)
8.9s -> 1.8s on Unix (dev).
Done in collaboration with @stinos.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Running the update inside the soft-reset loop will mean that (on boards
like PYBD that use a bootloader) the same reset mode is used each
reset loop, eg factory reset occurs each time.
Signed-off-by: Damien George <damien@micropython.org>
Add working example code to provide a starting point for users with files
that they can just copy, and include the modules in the coverage test to
verify the complete user C module build functionality. The cexample module
uses the code originally found in cmodules.rst, which has been updated to
reflect this and partially rewritten with more complete information.
Support building .cpp files and linking them into the micropython
executable in a way similar to how it is done for .c files. The main
incentive here is to enable user C modules to use C++ files (which are put
in SRC_MOD_CXX by py.mk) since the core itself does not utilize C++.
However, to verify build functionality a unix coverage test is added. The
esp32 port already has CXXFLAGS so just add the user modules' flags to it.
For the unix port use a copy of the CFLAGS but strip the ones which are not
usable for C++.
Support C++ code in .cpp files by providing CXX counterparts of the
_USERMOD_ flags we have for C already. This merely enables the Makefile of
user C modules to use variables specific to C++ compilation, it is still up
to each port's main Makefile to also include these in the build.
When SRC_QSTR contains C++ files they should be preprocessed with the same
compiler flags (CXXFLAGS) as they will be compiled with, to make sure code
scanned for QSTR occurrences is effectively the code used in the rest of
the build. The 'split SRC_QSTR in .c and .cpp files and process each with
different flags' logic isn't trivial to express in a Makefile and the
existing principle for deciding which files to preprocess was already
rather complicated, so the actual preprocessing is moved into
makeqstrdefs.py completely.
When process_file() is passed a preprocessed C++ file for instance it won't
find any lines containing .c files and the last_fname variable remains
None, so handle that gracefully.
If a port provides MICROPY_PY_URANDOM_SEED_INIT_FUNC as a source of
randomness then this will be used when urandom.seed() is called without
an argument (or with None as the argument) to seed the pRNG.
Other related changes in this commit:
- mod_urandom___init__ is changed to call seed() without arguments, instead
of explicitly passing in the result of MICROPY_PY_URANDOM_SEED_INIT_FUNC.
- mod_urandom___init__ will only ever seed the pRNG once (before it could
seed it again if imported by, eg, random and then urandom).
- The Yasmarang state is moved to the BSS for builds where the state is
guaranteed to be initialised on import of the (u)random module.
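A hedged sketch of the resulting behaviour on a port that provides
MICROPY_PY_URANDOM_SEED_INIT_FUNC:
import urandom

urandom.seed()        # seeded from the port's entropy source
urandom.seed(None)    # equivalent to the above
urandom.seed(42)      # explicit value for a reproducible sequence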
Signed-off-by: Damien George <damien@micropython.org>
The same seed will only occur if the board is the same, the RTC has the
same time (eg freshly powered up) and the first call to this function (eg
via an "import random") is done at exactly the same time since reset.
Signed-off-by: Damien George <damien@micropython.org>
For seeding, the RNG function of the ESP-IDF is used, which is said to be a
true RNG, at least when WiFi or Bluetooth is enabled. Seeding on import is
as per CPython. To obtain a reproducible sequence of pseudo-random numbers
one must explicitly seed with a known value.
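For example, a hedged sketch of getting a reproducible sequence on esp32:
import urandom

urandom.seed(1234)
print([urandom.getrandbits(8) for _ in range(4)])   # same list every time after this seed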
Prior to this commit, the ADC calibration code was never executing because
the ADVREGEN bit was set, making the CR register always non-zero.
This commit changes the logic so that ADC calibration is always run when
the ADC is disabled and an ADC channel is initialised. It also uses the LL
API functions to do the calibration, to make sure it is done correctly on
each MCU variant.
Signed-off-by: Damien George <damien@micropython.org>
When threading is enabled without the GIL then there can be races between
the threads accessing the globals dict. Avoid this issue by making sure
all global variables are allocated before starting the threads.
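A minimal sketch of the pattern used (names are illustrative):
import _thread

result = None          # allocate the global before any thread starts

def worker():
    global result
    result = 123

_thread.start_new_thread(worker, ())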
Signed-off-by: Damien George <damien@micropython.org>
Newer GCC versions are able to warn about switch cases that fall
through. This is usually a sign of a forgotten break statement, but in
the few cases where a fall through is intended we annotate it with this
macro to avoid the warning.
Like Clang, GCC warns about this file, but only with -Woverride-init
which is enabled by -Wextra. Disable the warnings for this file just
like we do for Clang to make -Wextra happy.
When compiling with -Wextra which includes -Wmissing-field-initializers
GCC will warn that the defval field of mp_arg_val_t is not initialized.
This is just a warning as it is defined to be zero initialized, but since
it is a union it makes sense to be explicit about which member we're
going to use, so add the explicit initializers and get rid of the
warning.
On x86 chars are signed, but we're comparing a char to '0' + unsigned int,
which is promoted to an unsigned int. Let's promote the char to unsigned
before doing the comparison to avoid weird corner cases.
The function scope_find_or_add_id used to take a scope_kind_t enum and
save it in an uint8_t. Saving an enum in a uint8_t is fine, but
everywhere this function is called it is not actually given a
scope_kind_t but an anonymous enum instead. Let's give this enum a name
and use that as the argument type.
This doesn't change the generated code, but is a C type mismatch that
unfortunately doesn't show up unless you enable -Wenum-conversion.
If the device is not connected over USB CDC to a host then all output to
the CDC (eg initial boot messages) is written to the CDC TX buffer with
wrapping, so that the most recent data is retained when the USB CDC is
eventually connected (eg so the REPL banner is displayed upon connection).
This commit fixes a bug in this behaviour, which was likely introduced in
e4fcd216e0, where the initial data in the CDC
TX buffer is repeated multiple times on first connection of the device to
the host.
Signed-off-by: Damien George <damien@micropython.org>
This is a generally useful feature and because it's part of the object
model it cannot be added at runtime by some loadable Python code, so enable
it on the standard unix build.
The last argument of TUD_CDC_DESCRIPTOR() is the endpoint size (or
wMaxPacketSize), not the CDC RX buffer size (which can be larger than the
endpoint size).
Signed-off-by: Damien George <damien@micropython.org>
When installing WS firmware, the very first GET_STATE can take several
seconds to respond (especially with the larger binaries like
BLE_stack_full).
This commit allows stm.rfcore_sys_hci to take an optional timeout,
defaulting to SYS_ACK_TIMEOUT_MS (which is 250ms).
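A hedged sketch (the OGF/OCF values and the exact form of the timeout
argument are assumptions, not taken from this commit):
import stm

# Allow up to 5 seconds for the response instead of the default 250ms.
stm.rfcore_sys_hci(0x3f, 0x52, b'', 5000)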
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
The flash can sometimes be in an already-unlocked state, and attempting to
unlock it again will cause an immediate reset. So make _Flash.unlock()
check FLASH_CR_LOCK to get the current state.
Also fix some magic numbers for FLASH_CR_LOCK and FLASH_CR_STRT.
The machine.reset() could be removed because it no longer crashes now that
the flash unlock is fixed.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
This commit adds a script that can be run on-device to install FUS and WS
binaries from the filesystem. Instructions for use are provided in
the rfcore_firmware.py file.
The commit also removes unneeded functionality from the existing rfcore.py
debug script (and renames it rfcore_debug.py).
The new functions provide FUS/WS status, version and SYS HCI commands:
- stm.rfcore_status()
- stm.rfcore_fw_version(fw_id)
- stm.rfcore_sys_hci(ogf, ocf, cmd)
Changes are:
- Fix missing IRQ handler when SDMMC2 is used instead of SDMMC1 with H7
MCUs.
- Remove outdated H7 series compatibility macros.
- Define a common IRQ handler macro for the F4 series.
It requires mp_hal_time_ns() to be provided by a port. This function
allows very accurate absolute timestamps.
Enabled on unix, windows, stm32, esp8266 and esp32.
Signed-off-by: Damien George <damien@micropython.org>
With a warning that this way of constructing software I2C/SPI is
deprecated. The check and warning will be removed in a future release.
This should help existing code to migrate to the new SoftI2C/SoftSPI types.
Signed-off-by: Damien George <damien@micropython.org>
Previous commits removed the ability for one I2C/SPI constructor to
construct both software- or hardware-based peripheral instances. Such
construction is now split to explicit soft and non-soft types.
This commit makes both types available in all ports that previously could
create both software and hardware peripherals: machine.I2C and machine.SPI
construct hardware instances, while machine.SoftI2C and machine.SoftSPI
create software instances.
This is a breaking change for use of software-based I2C and SPI. Code that
constructed I2C/SPI peripherals in the following way will need to be
changed:
machine.I2C(-1, ...) -> machine.SoftI2C(...)
machine.I2C(scl=scl, sda=sda) -> machine.SoftI2C(scl=scl, sda=sda)
machine.SPI(-1, ...) -> machine.SoftSPI(...)
machine.SPI(sck=sck, mosi=mosi, miso=miso)
-> machine.SoftSPI(sck=sck, mosi=mosi, miso=miso)
Code which uses machine.I2C and machine.SPI classes to access hardware
peripherals does not need to change.
Signed-off-by: Damien George <damien@micropython.org>
The SoftSPI constructor is now used solely to create SoftSPI instances; it
can no longer delegate to create a hardware-based SPI instance.
Signed-off-by: Damien George <damien@micropython.org>
The SoftI2C constructor is now used solely to create SoftI2C instances; it
can no longer delegate to create a hardware-based I2C instance.
Signed-off-by: Damien George <damien@micropython.org>
Also rename machine_i2c_type to mp_machine_soft_i2c_type. These changes
make it clear that it's a soft-I2C implementation, and match SoftSPI.
Signed-off-by: Damien George <damien@micropython.org>
Some downstream projects may use tags in their repositories for more than
just designating MicroPython releases. In those cases, the
makeversionhdr.py script would end up using a different tag than intended.
So tell `git describe` to only match tags that look like a MicroPython
version tag, such as `v1.12` or `v2.0`.
Zephyr v2.4.0 added a const qualifier to usages of struct device to
allow storing device driver instances exclusively in flash and thereby
reduce RAM footprint.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
The use of -S ensures that only the CPython standard library is accessible,
which makes tests run the same regardless of any site-packages that are
installed. It also improves start-up time of CPython, reducing the overall
time spent running the test suite.
tests/basics/containment.py is updated to work around issue with old Python
versions not being able to str-format a dict-keys object, which becomes
apparent when -S is used.
Signed-off-by: Damien George <damien@micropython.org>
A read-only memoryview object is a better representation of the data, which
is owned by the ubluetooth module and may change between calls to the
user's irq callback function.
Signed-off-by: Damien George <damien@micropython.org>
Calling the bytes constructor on a bytes object returns the original bytes
object. This saves allocating a new instance, and matches CPython.
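For example:
b = b'\x01\x02\x03'
print(bytes(b) is b)   # True: the same object is returned, no new allocation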
Signed-off-by: Iyassou Shimels <s.iyassou@gmail.com>
Make the instructions more complete by documenting all needed steps for
starting from scratch. Also add a section for MSYS2 since the Travis build
uses it as well and it's a good alternative to Cygwin. Remove the mingw32
reference since it's not readily available anymore in most Linux distros,
nor does it compile successfully.
Prior to this commit, uos.chdir('/') followed by uos.stat('noexist') would
succeed even though the entry did not exist (some other functions
like listdir would have similar issues). This is because, if the current
directory was the root and the path was relative, mp_vfs_lookup_path would
return success for bad paths.
Signed-off-by: Damien George <damien@micropython.org>
The device info table has a different layout when core 2 is in FUS mode.
In particular it's larger than the 32 bytes used when in WS mode and if the
correct amount of space is not allocated then the end of the table may be
overwritten with other data (eg with FUS version 0.5.3). So update the
structure to fix this.
Also update rfcore.py to disable IRQs (which are enabled by rfcore.c), to
not depend on uctypes, and to not require the asm_thumb emitter.
Signed-off-by: Damien George <damien@micropython.org>
And enable this feature on the unix coverage variant. The .exp test file
is needed so the test can run on CPython versions prior to "@=" operator
support.
Signed-off-by: Damien George <damien@micropython.org>
For time-based functions that work with absolute time there is the need for
an Epoch, to set the zero-point at which the absolute time starts counting.
Such functions include time.time() and filesystem stat return values. And
different ports may use a different Epoch.
To make it clearer what functions use the Epoch (whatever it may be), and
make the ports more consistent with their use of the Epoch, this commit
renames all Epoch related functions to include the word "epoch" in their
name (and remove references to "2000").
Along with this rename, the following things have changed:
- mp_hal_time_ns() is now specified to return the number of nanoseconds
since the Epoch, rather than since 1970 (but since this is an internal
function it doesn't change anything for the user).
- littlefs timestamps on the esp8266 have been fixed (they were previously
off by 30 years in nanoseconds).
Otherwise, there is no functional change made by this commit.
Signed-off-by: Damien George <damien@micropython.org>
To portably get the Epoch. This is simply aliased to localtime() on ports
that are not timezone aware.
Signed-off-by: Damien George <damien@micropython.org>
This commit removes release-specific builds for the esp8266 and makes the
normal build of the GENERIC board more like the release build. This makes
esp8266 like all the other ports, for which there is no difference between
a daily build and a release build, making things less confusing.
Release builds were previously defined by UART_OS=-1 (disable OS messages)
and using manifest_release.py to include more frozen modules.
The changes in this commit are:
- Remove manifest_release.py.
- Add existing modules from manifest_release.py (except example code)
to the GENERIC board's manifest.py file.
- Change UART_OS default to -1 to disable OS messages by default.
Signed-off-by: Damien George <damien@micropython.org>
PPP support was disabled in 96008ff59a -
marked as "unsupported" due to an early IDF v4 release. With the currently
supported IDF v4.x version - 4c81978a - it appears to be working just fine.
This commit changes the default logging level on all esp32 boards to ERROR.
The esp32 port is now stable enough that it makes sense to remove the info
logs to make the output cleaner, and to match other ports. More verbose
logging can always be reenabled via esp.osdebug().
This also fixes issue #6354, error messages from NimBLE: the problem is
that ble.active(True) will cause the IDF's NimBLE port to reset the
"NimBLE" tag back to the default level (which was INFO prior to this
commit). Even if the user had previously called esp.osdebug(None), because
the IDF is setting the "NimBLE" tag back to the default (INFO), the
messages will continue to be shown.
The one quirk is that if the user does want to see the additional logging,
then they must call esp.osdebug(0, 3) after ble.active(True) to undo the
IDF setting the level back to the default (now ERROR). This means that
it's impossible (via Python/esp.osdebug) to see stack-startup logging;
you'd have to recompile with the default level changed back to INFO.
For the following 3 functions, previously the code relied on whether the
arg was passed at all, to make it optional. Now it allows them to be
explicitly `None` to indicate they are not used:
- gatts_notify(..., [data])
- gattc_discover_services(..., [uuid])
- gattc_discover_characteristics(..., [uuid])
Also ensure that the uuid arguments are actually instances of the uuid
type, and fix handling of the 5th arg in gattc_discover_characteristics().
Support freezing modules via manifest.py for consistency with the other
ports. In essence this comes down to calling makemanifest.py and adding
the resulting .c file to the build. Note the file with preprocessed qstrs
has been renamed to match what makemanifest.py expects and which is also
the name all other ports use.
The msvc compiler doesn't accept zero-sized arrays so let the freezing
process generate compatible C code in case no modules are found and the
involved arrays are all empty. This doesn't affect the functionality in
any way because those arrays only get accessed when mp_frozen_mpy_names
contains names, i.e. when modules are actually found.
Prior to this commit, pow(-2, float('nan')) would return (nan+nanj), or
raise an exception on targets that don't support complex numbers. This is
fixed to return simply nan, as CPython does.
Signed-off-by: Damien George <damien@micropython.org>
The mpconfigport.h file is an internal header and should only ever be
included once by mpconfig.h.
Signed-off-by: Damien George <damien@micropython.org>
MP_BC_CALL_FUNCTION will leave the result on the Python stack, so that
result must be discarded by MP_BC_POP_TOP.
Signed-off-by: Damien George <damien@micropython.org>
This allows prototyping rfcore.c improvements from Python.
This was mostly written by @dpgeorge with small modifications to work after
rfcore_init() by @jimmo.
Before this change there was up to a 128ms delay on incoming payloads from
CPU2 as it was polled by SysTick. Now the RX IRQ immediately schedules the
PendSV.
This is required to allow using WS firmware newer than 1.1.1 concurrently
with USB (e.g. USB VCP). It prevents CPU2 from modifying the CLK48 config
on boot.
Tested on WS=1.8 FUS=1.1.
See AN5289 and https://github.com/micropython/micropython/issues/6316
- Split tables and buffers into SRAM2A/2B.
- Use structs rather than word offsets to access tables.
- Use FLASH_IPCCDBA register value rather than option bytes directly.
This allows `ble.active(1)` to fail correctly if the HCI controller is
unavailable.
It also avoids an infinite loop in the NimBLE event handler where NimBLE
doesn't correctly detect that the HCI controller is unavailable and keeps
trying to reset.
Furthermore, it fixes an issue where GATT service registrations were left
allocated, which led to a bad realloc if the stack was activated multiple
times.
MicroPython and NimBLE must be on the same core, for synchronisation of the
BLE ringbuf and the MicroPython scheduler. However, in the current IDF
versions (3.3 and 4.0) there are issues (see e.g. #5489) with running
NimBLE on core 1.
This change - pinning both tasks to core 0 - makes it possible to reliably
run the BLE multitests on esp32 boards.
Generally a controller should either have its own public address hardcoded,
or loaded by the driver (e.g. cywbt43).
However, for a controller that has no public address where you still want a
long-term stable address, this allows you to use a static address generated
by the port. Typically on STM32 this will be an LAA, but a board might
override this.
This commit adds support for using Bluetooth on the unix port via a H4
serial interface (distinct from a USB dongle), with both BTstack and NimBLE
Bluetooth stacks.
Note that MICROPY_PY_BLUETOOTH is now disabled for the coverage variant.
Prior to this commit Bluetooth was anyway not being built on Travis because
libusb was not detected. But now that bluetooth works in H4 mode it will
be built, and will lead to a large decrease in coverage because Bluetooth
tests cannot be run on Travis.
Previously the interaction between the different layers of the Bluetooth
stack was different on each port and each stack. This commit defines
common interfaces between them and implements them for cyw43, btstack,
nimble, stm32, unix.
mp_irq_init() is useful when the IRQ object is allocated by the caller.
The mp_irq_methods_t.init method is not used anywhere so has been removed.
Signed-off-by: Damien George <damien@micropython.org>
This is consistent with the other 'micro' modules and allows implementing
additional features in Python via e.g. micropython-lib's sys.
Note this is a breaking change (not backwards compatible) for ports which
do not enable weak links, as "import sys" must now be replaced with
"import usys".
Verifies mtime timestamps on files match the value returned by time.time().
Also update vfs_fat_ramdisk.py so it doesn't check FAT timestamp of the
root, because that may change across runs/ports.
Signed-off-by: Damien George <damien@micropython.org>
By setting MICROPY_EPOCH_IS_1970 a port can opt to use 1970/1/1 as the
Epoch for timestamps returned by stat(). And this setting is enabled on
the unix and windows ports because that's what they use.
Signed-off-by: Damien George <damien@micropython.org>
On 32-bit builds these stat fields will overflow a small-int, so use
mp_obj_new_int_from_uint to construct the int object.
Signed-off-by: Damien George <damien@micropython.org>
On ports like unix where the Epoch is 1970/1/1 and atime/mtime/ctime are in
seconds since the Epoch, this value will overflow a small-int on 32-bit
systems. So far this is only an issue on 32-bit unix builds that use the
VFS layer (eg dev and coverage unix variants) but the fix (using
mp_obj_new_int_from_uint instead of MP_OBJ_NEW_SMALL_INT) is there for all
ports so as to not complicate the code, and because they will need the
range one day.
Also apply a similar fix to other fields in VfsPosix.stat because they may
also be large.
Signed-off-by: Damien George <damien@micropython.org>
gettimeofday returns seconds since 2000/1/1 so needs to be adjusted to
seconds since 1970/1/1 to give the correct return value of mp_hal_time_ns.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes the cases when a TCP socket is in STATE_NEW,
STATE_LISTENING or STATE_CONNECTING and recv() is called on it. It now
raises ENOTCONN instead of a random error code due to it previously
indexing beyond the start of error_lookup_table[].
Signed-off-by: Damien George <damien@micropython.org>
Updating to Black v20.8b1 there are two changes that affect the code in
this repository:
- If there is a trailing comma in a list (eg [], () or function call) then
that list is now written out with one line per element. So remove such
trailing commas where the list should stay on one line.
- Spaces at the start of """ doc strings are removed.
Signed-off-by: Damien George <damien@micropython.org>
So they can be skipped if __rOP__'s are not supported on the target. Also
fix the typo in the complex_special_methods.py filename.
Signed-off-by: Damien George <damien@micropython.org>
The memory operation functions read_mem() and write_mem() create a
temporary buffer on the local C stack for the address bytes with the size
of 4 bytes. This buffer is filled in a loop from the user supplied address
and address length. If the user supplied 'addrsize' is bigger than 32, the
local buffer is overrun.
Fix this by raising an exception for invalid 'addrsize' values.
Signed-off-by: Michael Buesch <m@bues.ch>
A configurable result directory is advantageous because it enables
using a dedicated location, possibly outside of the source tree,
instead of forcing the output files into a fixed directory which might
already contain other files. For that reason the default output
directory has also been changed to tests/results/.
Replace some usages of paths relative to the current working directory
with absolute paths relative to the tests directory.
Fixes and resulting changes:
- default values of MICROPYTHON and MPYCROSS are absolute paths and
always correct
- likewise, the correct full paths for tools and extmod directories
are appended to sys.path
- printing/cleaning failures works properly since it expects the .exp
and .out files in the tests directory which is also where they
are written to now, plus no more need for changing directories
This fixes #5872 and allows running custom tests which use run-tests
without having to cd to the tests directory first, and the test output
still is in the tests/ directory instead of the current working directory.
Discovery of tests and all skip test logic based on paths relative to
the current working directory remains unchanged which essentially means
that for running most of MicroPython's own tests, run-tests must still
be run from within its directory, so document that.
With sleep(0.2) a multiple of sleep(0.1), the order of task 2 and 3
execution is not well defined, and depends on the precision of the system
clock and how fast the rest of the code runs. So change 0.2 to 0.18 to
make the test more reliable.
Also fix a typo of t3/t4, and cancel t4 at the end.
Signed-off-by: Damien George <damien@micropython.org>
This adds an additional optional parameter to gap_scan() to select active
scanning, where scan responses are returned as well as normal scan results.
This parameter is False by default which retains the existing behaviour.
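A minimal usage sketch (assuming `ble` is an active ubluetooth.BLE instance
and using the existing duration/interval/window arguments):
import ubluetooth
ble = ubluetooth.BLE()
ble.active(True)
ble.gap_scan(10000, 30000, 30000, True)  # last argument enables active scanning
ble.gap_scan(10000, 30000, 30000)        # defaults to False: passive scan, as before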
The READ_REQUEST callback is handled as a hard interrupt (because the BLE
stack needs an immediate response from it so it can continue) and so calls
to Python require extra protection:
- the caller-owned tuple passed into the callback must be separate from the
tuple used by other callback events (which are soft interrupts);
- the GC and scheduler must be locked during callback execution.
This commit adds support for modification time of files on littlefs v2
filesystems, using file attributes. For some background see issue #6114.
Features/properties of this implementation:
- Only supported on littlefs2 (not littlefs1).
- Uses littlefs2's general file attributes to store the timestamp.
- The timestamp is 64-bits and stores nanoseconds since 1970/1/1 (if the
range to the year 2554 is not enough then additional bits can be added to
this timestamp by adding another file attribute).
- mtime is enabled by default but can be disabled in the constructor, eg:
uos.mount(uos.VfsLfs2(bdev, mtime=False), '/flash')
- It's fully backwards compatible, existing littlefs2 filesystems will work
without reformatting and timestamps will be added transparently to
existing files (once they are opened for writing).
- Files without timestamps will open correctly, and stat will just return 0
for their timestamp.
- mtime can be disabled or enabled each mount time and timestamps will only
be updated if mtime is enabled (otherwise they will be untouched).
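As a usage sketch (assuming `bdev` is a block device set up elsewhere, as in
the constructor example above; index 8 of the stat tuple is st_mtime):
import uos
uos.mount(uos.VfsLfs2(bdev, mtime=True), '/flash')
with open('/flash/hello.txt', 'w') as f:
    f.write('hi')
print(uos.stat('/flash/hello.txt')[8])  # mtime in seconds since the port's epoch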
Signed-off-by: Damien George <damien@micropython.org>
Otherwise a task that continuously awaits on a large negative sleep can
monopolise the scheduler (because its wake time is always less than
everything else in the pairing heap).
Signed-off-by: Damien George <damien@micropython.org>
As per CPython behaviour, compile(stmt, "file", "single") should create
code which prints to stdout (via __repl_print__) the results of any
expressions in stmt.
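For example, matching CPython's behaviour:
code = compile("40 + 2", "file", "single")
exec(code)  # prints 42, as if the expression were entered at the REPL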
Signed-off-by: Damien George <damien@micropython.org>
The existing implementation of mkdir() in this file is not sophisticated
enough to work correctly on all operating systems (eg Mac can raise
EISDIR). Using the standard os.makedirs() function handles all cases
correctly.
Signed-off-by: Damien George <damien@micropython.org>
Prior to this commit, pyboard.py used eval() to "parse" file data received
from the board. Using eval() on received data from a device is dangerous,
because a malicious device may inject arbitrary code execution on the PC
that is doing the operation.
Consider the following scenario:
Eve may write a malicious script to Bob's board in his absence. On return
Bob notices that something is wrong with the board, because it doesn't work
as expected anymore. He wants to read out boot.py (or any other file) to
see what is wrong. What he gets is a remote code execution on his PC.
Proof of concept:
Eve:
$ cat boot.py
_print = print
print = lambda *x, **y: _print("os.system('ls /; echo Pwned!')", end="\r\n\x04")
$ ./pyboard.py -f cp boot.py :
cp boot.py :boot.py
Bob:
$ ./pyboard.py -f cp :boot.py /tmp/foo
cp :boot.py /tmp/foo
bin chroot dev home lib32 media opt root sbin sys usr
boot config etc lib lib64 mnt proc run srv tmp var
Pwned!
There's also the possibility that the device is malfunctioning and sends
random and possibly dangerous data back to the PC, to be eval'd.
Fix this problem by using ast.literal_eval() to parse the received bytes,
instead of eval().
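As an illustration of why literal_eval is the safer choice (names, calls and
attribute access are rejected rather than evaluated):
import ast
ast.literal_eval("b'hello'")                       # -> b'hello'
ast.literal_eval("__import__('os').system('ls')")  # raises ValueError, nothing is executed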
Signed-off-by: Michael Buesch <m@bues.ch>
Prior to this commit, if you configure a pin as an output type (I2C in this
example) and then later configure it back as an input, then it will report
the type incorrectly. Example:
>>> import machine
>>> b6 = machine.Pin('B6')
>>> b6
Pin(Pin.cpu.B6, mode=Pin.IN)
>>> machine.I2C(1)
I2C(1, scl=B6, sda=B7, freq=420000)
>>> b6
Pin(Pin.cpu.B6, mode=Pin.ALT_OPEN_DRAIN, pull=Pin.PULL_UP, af=Pin.AF4_I2C1)
>>> b6.init(machine.Pin.IN)
>>> b6
Pin(Pin.cpu.B6, mode=Pin.ALT_OPEN_DRAIN, af=Pin.AF4_I2C1)
With this commit the last print now works:
>>> b6
Pin(Pin.cpu.B6, mode=Pin.IN)
Latest versions of Sphinx (at least 3.1.0) do not need the `*` escaped and
will render the `\` in the output if it is there, so remove it.
Fixes issue #6209.
Adds a job to build the zephyr port in CI using the same docker container
that the zephyr project uses for its own CI.
Always make clean zephyr builds to ensure we don't just rebuild C code, but
we also rebuild Kconfig and dts. This is required when switching between
boards, which have different Kconfigs and device trees.
Uses the tagged zephyr 2.3.0 release.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
Include storage/flash_map.h unconditionally so we always have access to the
FLASH_AREA_LABEL_EXISTS macro, even if CONFIG_FLASH_MAP is not defined.
This fixes a build error for the qemu_x86 board:
main.c:108:63: error: missing binary operator before token "("
108 | #elif defined(CONFIG_FLASH_MAP) && FLASH_AREA_LABEL_EXISTS(storage)
| ^
../../py/mkrules.mk:88: recipe for target 'build/genhdr/qstr.i.last' failed
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
mp_reader_new_file() is used to read in files for importing, either .py or
.mpy files, for the lexer and persistent code loader respectively. In both
cases the file should be opened in raw bytes mode: the lexer handles
unicode characters itself, and .mpy files contain 8-bit bytes by nature.
Before this commit importing was working correctly because, although the
file was opened in text mode, all native filesystem implementations (POSIX,
FAT, LFS) would access the file in raw bytes mode via mp_stream_rw()
calling mp_stream_p_t.read(). So it was only an issue for non-native
filesystems, such as those implemented in Python. For Python-based
filesystem implementations, a call to mp_stream_rw() would go via IOBase
and then to readinto() at the Python level, and readinto() is only defined
on files opened in raw bytes mode.
Signed-off-by: Damien George <damien@micropython.org>
If mpy-cross exits with an error be sure to print that error in a way that
is readable, instead of a long bytes object.
Signed-off-by: Damien George <damien@micropython.org>
On ports where normal heap memory can contain executable code (eg ARM-based
ports such as stm32), native code loaded from an .mpy file may be reclaimed
by the GC because there's no reference to the very start of the native
machine code block that is reachable from root pointers (only pointers to
internal parts of the machine code block are reachable, but that doesn't
help the GC find the memory).
This commit fixes this issue by maintaining an explicit list of root
pointers pointing to native code that is loaded from an .mpy file. This
is not needed for all ports so is selectable by the new configuration
option MICROPY_PERSISTENT_CODE_TRACK_RELOC_CODE. It's enabled by default
if a port does not specify any special functions to allocate or commit
executable memory.
A test is included to test that native code loaded from an .mpy file does
not get reclaimed by the GC.
Fixes #6045.
Signed-off-by: Damien George <damien@micropython.org>
All imports are now tested to see if the test should be skipped,
UserFile.read is removed, and UserFile.readinto is made more efficient.
Signed-off-by: Damien George <damien@micropython.org>
These tests are specific to MicroPython so have a better home in the
micropython/ test subdir, and putting them here allows them to be run by
all targets, not just those that have access to the local filesystem (eg
the unix port).
Signed-off-by: Damien George <damien@micropython.org>
It raises EOFError instead of an IncompleteReadError (which is what
CPython does). But the latter is derived from EOFError so code compatible
with MicroPython and CPython can be written by catching EOFError (eg see
included test).
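A minimal sketch of the compatible pattern (assuming `reader` is a stream
obtained from e.g. asyncio.open_connection):
async def read_header(reader):
    try:
        return await reader.readexactly(4)
    except EOFError:  # IncompleteReadError subclasses EOFError on CPython
        return None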
Fixes issue #6156.
Signed-off-by: Damien George <damien@micropython.org>
The SCSI driver calls GetCapacity to get the block size and number of
blocks of the underlying block-device/LUN. It caches these values and uses
them later on to verify that reads/writes are within the bounds of the LUN.
But, prior to this commit, there was only one set of cached values for all
LUNs, so the bounds checking for a LUN could use incorrect values, values
from one of the other LUNs that most recently updated the cached values.
This would lead to failed SCSI requests.
This commit fixes this issue by having separate cached values for each LUN.
Signed-off-by: Damien George <damien@micropython.org>
MicroPython's original implementation of __aiter__ was correct for an
earlier (provisional) version of PEP492 (CPython 3.5), where __aiter__ was
an async-def function. But that changed in the final version of PEP492 (in
CPython 3.5.2) where the function was changed to a normal one. See
https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable
See also the note at the end of this subsection in the docs:
https://docs.python.org/3.5/reference/datamodel.html#asynchronous-iterators
And for completeness the BPO: https://bugs.python.org/issue27243
To be consistent with the Python spec as it stands today (and now that
PEP492 is final) this commit changes MicroPython's behaviour to match
CPython: __aiter__ should return an async-iterable object, but is not
itself awaitable.
The relevant tests are updated to match.
See #6267.
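A sketch of an async iterator written against the final PEP 492 semantics,
which works on both MicroPython and CPython:
class ARange:
    def __init__(self, n):
        self.i = 0
        self.n = n
    def __aiter__(self):  # plain method: returns the async-iterable, is not itself awaitable
        return self
    async def __anext__(self):
        if self.i >= self.n:
            raise StopAsyncIteration
        self.i += 1
        return self.i - 1

async def sum_arange():
    s = 0
    async for v in ARange(3):
        s += v
    return s  # 0 + 1 + 2 == 3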
Enabling the following features for all targets, except for nrf51
targets compiled to be used with SoftDevice:
- MICROPY_PY_ARRAY_SLICE_ASSIGN
- MICROPY_PY_SYS_STDFILES
- MICROPY_PY_UBINASCII
Splitting mpconfigport.h into multiple device-specific
files in order to facilitate variations between devices.
Because devices might have variations in features and also in
flash size, it makes sense for some devices to offer more
functionality than others without being limited by the most
restricted devices.
For example more micropython features can be activated for
nrf52840 with 1MB flash, compared to nrf51 with 256KB.
Fixing 98e583430f, the semantics of strncpy
require that the remainder of dst be filled with null bytes.
Signed-off-by: Damien George <damien@micropython.org>
This code is imported from musl, to match existing code in libm_dbl.
The file is also added to the build in stm32/Makefile. It's not needed by
the core code but, similar to c5cc64175b,
allows round() to be used by user C modules or board extensions.
Because the argument arrays may overlap, as shown by the new tests in this
commit.
Also remove the debugging comments for these macros, add a new comment
about overlapping regions, and separate the macros by blank lines to make
them easier to read.
Fixes issue #6244.
Signed-off-by: Damien George <damien@micropython.org>
Changes in this new library version are:
- Update H7 HAL to v1.6.0.
- Update WB HAL to v1.6.0.
- Add patches to fix F4 ll_uart clock selection for UART9/UART10.
Signed-off-by: Damien George <damien@micropython.org>
A previous commit 3a9d948032 can cause
lock-ups of the RMT driver, so this commit reverses that, adds a loop_en
flag, and explicitly controls the TX interrupt in write_pulses(). This
provides correct looping, non-blocking writes and sensible behaviour for
wait_done().
See also #6167.
The file `mbedtls_errors/mp_mbedtls_errors.c` can be used instead of
`mbedtls/library/error.c` to give shorter error strings, reducing the build
size of the error strings from about 12-16kB down to about 2-5kB.
This commit adds human readable error messages when mbedtls or axtls raise
an exception. Currently often just an EIO error is raised so the user is
lost and can't tell whether it's a cert error, buffer overrun, connecting
to a non-ssl port, etc. The axtls and mbedtls error raising in the ussl
module is modified to raise:
OSError(-err_num, "error string")
For axtls a small error table of strings is added and used for the second
argument of the OSError. For mbedtls the code uses mbedtls' built-in
strerror function, and if there is an out of memory condition it just
produces OSError(-err_num). Producing the error string for mbedtls is
conditional on them being included in the mbedtls build, via
MBEDTLS_ERROR_C.
This commit adds the IRQ_GATTS_INDICATE_DONE BLE event which will be raised
with the status of gatts_indicate (unlike notify, indications require
acknowledgement).
An example of its use is added to ble_temperature.py, and to the multitests
in ble_characteristic.py.
Implemented for btstack and nimble bindings, tested in both directions
between unix/btstack and pybd/nimble.
The goal of this commit is to allow using ble.gatts_notify() at any time,
even if the stack is not ready to send the notification right now. It also
addresses the same issue for ble.gatts_indicate() and ble.gattc_write()
(without response). In addition this commit fixes the case where the
buffer passed to write-with-response wasn't copied, meaning it could be
modified by the caller, affecting the in-progress write.
The changes are:
- gatts_notify/indicate will now run in the background if the ACL buffer is
currently full, meaning that notify/indicate can be called at any time.
- gattc_write(mode=0) (no response) will now allow for one outstanding
write.
- gattc_write(mode=1) (with response) will now copy the buffer so that it
can't be modified by the caller while the write is in progress.
All four paths also now track the buffer while the operation is in
progress, which prevents the GC free'ing the buffer while it's still
needed.
It might compile fine with S132 as the SoftDevice for nRF52840.
However, there might be different hardware on the SoC which in
turn could make it fail.
SoftDevice S140 is the correct BLE stack for the nRF52840 SoC and would
provide a more accurate test build.
So that micropython-dev can be used to test VFS code, and inspect and build
filesystem images that are compatible with bare-metal systems.
Signed-off-by: Damien George <damien@micropython.org>
This commit adds time.ticks_ms/us support using RTC1 as the timebase. It
also adds the time.ticks_add/diff helper functions. This feature can be
enabled using MICROPY_PY_TIME_TICKS. If disabled the system uses the
legacy sleep methods and does not have any ticks functions.
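A usage sketch of the ticks API this enables (the standard MicroPython time
functions):
import time
start = time.ticks_ms()
time.sleep_ms(50)
elapsed = time.ticks_diff(time.ticks_ms(), start)  # approx 50
deadline = time.ticks_add(start, 100)              # a point 100ms after start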
In addition support for MICROPY_EVENT_POLL_HOOK was added to the
time.sleep_ms(x) function, making this function more power efficient and
allows support for select.poll/asyncio. To support this, the RTC's CCR0
was used to schedule a ~1msec event to wakeup the CPU.
Some important notes about the RTC timebase:
- Since the granularity of RTC1's ticks is approx 30usec, time.ticks_us is
not perfect, does not have 1us resolution, but is otherwise quite usable.
For tighter measurements the ticker's 1MHz counter should be used.
- time.ticks_ms(x) should *not* be called in an IRQ with higher prio than
the RTC overflow irq (3). If so it introduces a race condition and
possibly leads to wrong tick calculations.
See #6171 and #6202.
When compiling for debug (-O0) the .text segment cannot fit the flash
region when MICROPY_ROM_TEXT_COMPRESSION=1, because the compiler does not
optimise away the large if-else chain used to select the correct compressed
string.
This commit enforces MICROPY_ROM_TEXT_COMPRESSION=0 when compiling for
debug (DEBUG=1).
The storage space of the advertisement name is not declared static, leading
to a random advertisement name. This commit fixes the issue by declaring
it static.
The Bluetooth link gets disconnected when connecting from a PC after 30-40
seconds. This commit adds handling of the data length update request. The
data length parameter pointer is set to NULL in the reply, letting the
SoftDevice automatically set values and use them in the data length update
procedure.
mp_keyboard_interrupt() triggers a compiler error because the function is
implicitly declared. This commit adds "py/runtime.h" to the includes.
Fixes issue #5732.
Changes are:
- The default manifest.py is moved to the variants directory (it's in
"boards" in other ports).
- The coverage variant now uses a custom manifest in its variant directory
to add frzmpy/frzstr.
- The frzmpy/frzstr tests are moved to variants/coverage/.
This reverts commit 4d6f60d428.
This implementation used the timeout as a maximum amount of time needed for
the operation, when actually the spec and other tools suggest that it's the
minimum delay needed between subsequent USB transfers.
Polling mode will cause failures with the mass-erase command due to USB
timeouts, because the USB IRQs are not being serviced. Switching from
polling to IRQ mode fixes this because the USB IRQs can be serviced between
page erases.
Note that when the flash is being programmed or erased the MCU is halted
and cannot respond to USB IRQs, because mboot runs from flash, as opposed
to the built-in bootloader which is in system ROM. But the maximum delay
in responding to an IRQ is the time taken to erase a single page, about
100ms for large pages, and that is short enough that the USB does not
timeout on the host side.
Recent tests have shown that in the current mboot code IRQ mode is pretty
much the same speed as polling mode (within timing error), code size is
slightly reduced in IRQ mode, and IRQ mode idles at about half of the power
consumption as polling mode.
This is treated more like a "delay before continuing" in the spec and
official tools and does not appear to be really needed. In particular,
downloading firmware is much slower with non-zero timeouts because the host
must pause by the timeout between sending each DFU_GETSTATUS to poll for
download/erase complete.
This commit fixes lookups of class members to make it so that built-in
functions that are used as methods/functions of a class work correctly.
The mp_convert_member_lookup() function is pretty much completely changed
by this commit, but for the most part it's just reorganised and the
indenting changed. The functional changes are:
- staticmethod and classmethod checks moved to later in the if-logic,
because they are less common and so should be checked after the more
common cases.
- The explicit mp_obj_is_type(member, &mp_type_type) check is removed
because it's now subsumed by other, more general tests in this function.
- MP_TYPE_FLAG_BINDS_SELF and MP_TYPE_FLAG_BUILTIN_FUN type flags added to
make the checks in this function much simpler (now they just test this
bit in type->flags).
- An extra check is made for mp_obj_is_instance_type(type) to fix lookup of
built-in functions.
Fixes #1326 and #6198.
Signed-off-by: Damien George <damien@micropython.org>
Supports hard and soft interrupts. In the current implementation, soft
interrupt callbacks will only be called when the VM is executing, ie they
will not be called during a blocking kernel call like k_msleep. And the
behaviour of hard interrupt callbacks will depend on the underlying device,
as well as the amount of ISR stack space.
Soft and hard interrupts tested on frdm_k64f and nucleo_f767zi boards.
Signed-off-by: Damien George <damien@micropython.org>
So it can be unconditionally included in a port's build even if certain
configurations in that port do not use its features, to simplify the
Makefile.
Signed-off-by: Damien George <damien@micropython.org>
The implementation internally uses sector erase to wipe everything except
the sector(s) that mboot lives in (by erasing starting from
APPLICATION_ADDR).
The erase command can take some time (eg an STM32F765 with 2MB of flash
takes 8 to 10 seconds). This time is normally enough to make pydfu.py fail
with a timeout. The DFU standard includes a mechanism for the DFU device
to request a longer timeout as part of the get-status response just before
starting an operation. This timeout functionality has been implemented
here.
Before this commit the USB VCP TX ring-buffer used the basic implementation
where it can only be filled to a maximum of buffer size-1. For a 1024 size
buffer this means the largest packet that can be sent is 1023. Once a
packet of this size is sent the next byte copied in goes to the final byte
in the buffer, so must be sent as a 1 byte packet before the read pointer
can be wrapped around to the beginning. So in large streaming transfers,
watching the USB sniffer you basically get alternating 1023 byte packets
then 1 byte packets.
This commit changes the ring-buffer implementation to a scheme that doesn't
have the full-size limitation, and the USB VCP driver can now achieve a
constant stream of full-sized packets. This scheme introduces a
restriction on the size of the buffer: it must be a power of 2, and the
maximum size is half of the size of the index (in this case the index is
16-bit, so the maximum size would be 32767 bytes rounded to 16384 for a
power-of-2). But this is not a big limitation because the size of the
ring-buffer prior to this commit was restricted to powers of 2 because it
was using a mask-based method to wrap the indices.
For an explanation of the new scheme see
https://www.snellman.net/blog/archive/2016-12-13-ring-buffers/
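A hypothetical Python sketch of the scheme (not the C implementation): the
indices are free-running 16-bit counters that are only masked when indexing
the buffer, so all SIZE slots can be used:
SIZE = 1024                  # must be a power of 2
buf = bytearray(SIZE)
wr = rd = 0                  # free-running 16-bit indices

def count():
    return (wr - rd) & 0xFFFF    # bytes currently stored, 0..SIZE

def put(b):
    global wr
    if count() < SIZE:           # full only when count() == SIZE
        buf[wr & (SIZE - 1)] = b
        wr = (wr + 1) & 0xFFFF

def get():
    global rd
    if count():
        b = buf[rd & (SIZE - 1)]
        rd = (rd + 1) & 0xFFFF
        return b
    return None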
The RX buffer could likely do with a similar change, though as it's not
read from in chunks like the TX buffer it doesn't present the same issue,
all that's lost is one byte capacity of the buffer.
USB VCP TX throughput is improved by this change, potentially doubling the
speed in certain cases.
This allows complex binary operations to fail gracefully with unsupported
operation rather than raising an exception, so that special methods work
correctly.
Signed-off-by: Damien George <damien@micropython.org>
uint types in viper mode can now be used for all binary operators except
floor-divide and modulo.
Fixes issue #1847 and issue #6177.
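A sketch of the kind of code this enables (assuming the usual viper type
annotations):
import micropython

@micropython.viper
def scale(x: uint, y: uint) -> uint:
    # uint operands now work with most binary operators (not // or %)
    return (x * y) >> 4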
Signed-off-by: Damien George <damien@micropython.org>
By passing through the I2C instance to the application callbacks, the
application can implement multiple I2C slave devices on different
peripherals (eg I2C1 and I2C2).
This commit also adds a proper rw argument to i2c_slave_process_addr_match
for F7/H7/WB MCUs, and enables the i2c_slave_process_tx_end callback.
Mboot is also updated for these changes.
Signed-off-by: Damien George <damien@micropython.org>
Mboot now supports FAT, LFS1 and LFS2 filesystems, to load firmware from.
The filesystem needed by the board must be explicitly enabled by the
configuration variables MBOOT_VFS_FAT, MBOOT_VFS_LFS1 and MBOOT_VFS_LFS2.
Boards that previously used FAT implicitly (with MBOOT_FSLOAD enabled) must
now add the following config to mpconfigboard.h:
#define MBOOT_VFS_FAT (1)
Signed-off-by: Damien George <damien@micropython.org>
This commit factors the code for files and streaming to separate source
files (vfs_fat.c and gzstream.c respectively) and introduces an abstract
gzstream interface to make it easier to plug in different filesystems.
Signed-off-by: Damien George <damien@micropython.org>
There's no need to do a directory listing to search for the given firmware
filename, it just takes extra time and code size. Instead this commit
changes it so that the requested firmware file is opened immediately and
will abort if the file couldn't be opened. This also allows to specify
files in a directory.
Signed-off-by: Damien George <damien@micropython.org>
Previously, if FAT was not enabled but LFS1/2 was then MICROPY_PY_IO_FILEIO
would be disabled and file binary-mode was not supported.
Signed-off-by: Damien George <damien@micropython.org>
This adds support for freezing an entire directory while keeping the
directory as part of the import path. For example
freeze("path/to/library", "module")
will recursively freeze all scripts in "path/to/library/module" and have
them importable as "from module import ...".
Signed-off-by: Damien George <damien@micropython.org>
An OrderedDict can now be used for the locals when creating a type
explicitly via type(name, bases, locals).
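A minimal sketch (on bare-metal ports without weak links, import from
ucollections instead):
from collections import OrderedDict

members = OrderedDict()
members["a"] = 1
members["b"] = 2
MyClass = type("MyClass", (), members)
print(MyClass.a, MyClass.b)  # 1 2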
Signed-off-by: Damien George <damien@micropython.org>
Commit 8675858465 switched to using the CMSIS
provided SystemInit function which sets VTOR to 0x00000000 (previously it
was 0x08000000). A VTOR of 0x00000000 will be correct on some MCUs but not
on others where the built-in bootloader is remapped to this address, via
__HAL_SYSCFG_REMAPMEMORY_SYSTEMFLASH().
To make sure mboot has the correct vector table, this commit explicitly
sets VTOR to the correct value of 0x08000000.
Signed-off-by: Damien George <damien@micropython.org>
There's no need to duplicate this functionality in mboot, the code provided
in stm32lib/CMSIS does the same thing and makes it easier to support other
MCU series.
Signed-off-by: Damien George <damien@micropython.org>
The flash functions in ports/stm32/flash.c are almost identical to those in
ports/stm32/mboot/main.c, so remove the duplicated code in mboot and use
instead the main stm32 code. This also allows supporting other MCU series.
Signed-off-by: Damien George <damien@micropython.org>
This commit makes the low-level flash C functions usable by code other than
flashbdev.c (eg by mboot). Changes in this commit are:
- flash_erase() and flash_write() now return an errno error code, a
negative value on error.
- flash_erase() now automatically locks the flash, as well as unlocking it.
- flash_write() now automatically unlocks the flash, as well as locking it.
- flashbdev.c is modified for the above changes.
Signed-off-by: Damien George <damien@micropython.org>
irq.h is included by py/mphal.h but it's better to be explicit, eg if mboot
uses powerctrlboot.c.
Signed-off-by: Damien George <damien@micropython.org>
The irq.h file now just provides low-level IRQ definitions and priorities.
All Python binding definitions are moved to modmachine.h, with some
renaming of pyb -> machine, and also the machine_idle definition (was
pyb_wfi) is moved to modmachine.c.
The cc3200 and teensy ports are updated to build with these changes.
Signed-off-by: Damien George <damien@micropython.org>
Eventually it would be good to run the full test suite on a big-endian
system, but for now this will help to catch build errors with the
big-endian configuration.
Signed-off-by: Damien George <damien@micropython.org>
Otherwise the RMT will repeat pulses when using loop(True). This repeating
is due to a bug in the IDF which will be fixed in an upcoming release, but
for now the accepted workaround is to swap these calls, which should still
work in the fixed version of the IDF.
Fixes issue #6167.
With only `sp_func_proto_paren = remove` set there are some cases where
uncrustify misses removing a space between the function name and the
opening '('. This sets all of the related options to `force` as well.
There is a maximum of 8 USB endpoints and each has 2 buffer slots
(in/out). This commit adds support for up to 8 endpoints and adds FIFO
configuration for USB profiles with 2xVCP on MCUs that have device-only USB
peripherals.
Tested on NUCLEO_WB55 in 2xVCP, 2xVCP+MSC and 2xVCP+MSC+HID mode.
Signed-off-by: Damien George <damien@micropython.org>
The ESP32 RMT peripheral has hardware support for a carrier frequency, and
this commit exposes it to Python with the keyword arguments carrier_freq
and carrier_duty_percent in the constructor. Example usage:
r = esp32.RMT(0, pin=Pin(2), clock_div=80, carrier_freq=38000, carrier_duty_percent=50)
This addition to the grammar was introduced in Python 3.6. It allows
annotating the type of a variable, like:
x: int = 123
s: str
The implementation in this commit is quite simple and just ignores the
annotation (the int and str bits above). The reason to implement this is
to allow Python 3.6+ code that uses this feature to compile under
MicroPython without change, and for users to use type checkers.
In the future viper could use this syntax as a way to give types to
variables, which is currently done in a bit of an ad-hoc way, eg
x = int(123). And this syntax could potentially be used in the inline
assembler to define labels in a way that's easier to read.
The syntax matches CPython and the semantics are equivalent except that,
unlike CPython, MicroPython allows using := to assign to comprehension
iteration variables, because disallowing this would take a lot of code to
check for it.
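For example (the second form is accepted by MicroPython but is a SyntaxError
in CPython):
total = 0
running = [total := total + x for x in range(5)]  # [0, 1, 3, 6, 10] in both
rebound = [x := x + 1 for x in range(3)]          # MicroPython only: rebinds the iteration variable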
The new compile-time option MICROPY_PY_ASSIGN_EXPR selects this feature and
is enabled by default, following MICROPY_PY_ASYNC_AWAIT.
This is the result of running...
uncrustify -c tools/uncrustify.cfg --update-config-with-doc -o tools/uncrustify.cfg
...with some manual fixups to correct places where it changed things it
should not have.
Essentially it just adds new config parameters introduced in v0.71.0
with their default values.
Signed-off-by: David Lechner <david@pybricks.com>
Formatting for `* sizeof` was fixed in uncrustify v0.71, so we no longer
need the fixups for it. Also, there was one file where the updated
uncrustify caught a problem that the regex didn't pick up, which is updated
in this commit.
Signed-off-by: David Lechner <david@pybricks.com>
MicroPython already requires contributors to implicitly sign-off on a set
of points, which are listed in CODECONVENTIONS.md.
This commit adjusts this wording to allow explicit sign-off using the git
"Signed-off-by:" feature. There is no reference made to
https://developercertificate.org/ because the project already has its own
version of this.
Signed-off-by: Damien George <damien@micropython.org>
Zephyr renamed CONFIG_FLOAT to CONFIG_FPU to better reflect its semantics
of enabling the hardware floating point unit (FPU) rather than enabling
toolchain-level floating point support (i.e., software floating point for
FPU-less SoCs).
Zephyr deprecated and then removed its stack_analyze function because it
was unsafe. Use the new zephyr thread analyzer instead and rename the
MicroPython function to zephyr.thread_analyze() to be more consistent with
the implementation.
Tested on mimxrt1050_evk.
The output now looks like this:
>>> zephyr.thread_analyze()
Thread analyze:
80004ff4 : unused 400 usage 112 / 512 (21 %)
rx_workq : unused 1320 usage 180 / 1500 (12 %)
tx_workq : unused 992 usage 208 / 1200 (17 %)
net_mgmt : unused 656 usage 112 / 768 (14 %)
sysworkq : unused 564 usage 460 / 1024 (44 %)
idle : unused 256 usage 64 / 320 (20 %)
main : unused 2952 usage 1784 / 4736 (37 %)
This patch adds quickref documentation for the change in commit
afd0701bf7. This commit added the ability to
disable the REPL and hence use UART0 for serial communication on the
esp8266, but was not previously documented anywhere.
The text is largely taken from the commit message, with generic information
on using the UART duplicated from the Wipy quickref document.
For example, to run the BLE multitests entirely with the unix port:
env MICROPY_MICROPYTHON=../ports/unix/micropython-dev ./run-multitests.py \
-i micropython,MICROPYBTUSB=01 \
-i micropython,MICROPYBTUSB=02:02 \
multi_bluetooth/ble_*.py
Long ago, prior to 0ef01d0a75, fixed and
ordered maps were the same setting with the "table_is_fixed_array" member
of mp_map_t. But these settings are actually independent, and it is
possible to have is_fixed=1, is_ordered=0 (although this can currently
only be done by tools/cc1). So update the comments to reflect this.
The resulting dict is now marked as read-only (is_fixed=1) to enforce the
fact that changes to this dict will not be reflected in the class instance.
This commit reduces code size by about 20 bytes, and should be more
efficient because it creates a direct copy of the dict rather than
reinserting all elements.
The behavior mirrors the instance object dict attribute where a copy of the
local attributes is provided (unless the dict is read-only, then that dict
itself is returned, as an optimisation). MicroPython does not support
modifying this dict because the changes will not be reflected in the class.
The feature is only enabled if MICROPY_CPYTHON_COMPAT is set, the same as
the instance version.
Sending more than 64 bytes to the USB CDC endpoint in HS mode will lead to
a hard crash. This commit fixes the issue, although there may be a better
fix from upstream TinyUSB in the future.
This enables warnings as errors and fixes all current errors, namely:
- reference to terms in the glossary must now be explicit (:term:)
- method overloads must not be declared as a separate method or must
use :noindex:
- 2 cases where `` should have been used instead of `
With this commit the code should work correctly regardless of the size of
StackType_t (it's actually 1 byte in size for the esp32's custom FreeRTOS).
Fixes issue #6072.
According to Supplement to the Bluetooth Core Specification v8 Part A
1.3.1, to support BR/EDR the code should set the fifth bit (Simultaneous LE
and BR/EDR to Same Device Capable (Controller)) and fourth bit
(Simultaneous LE and BR/EDR to Same Device Capable (Host)) of the flag.
The ring buffer previously used a single unsigned byte field to save the
length, meaning that it would overflow for large characteristic value
responses.
With this commit it now uses a 16-bit length instead and has code to
explicitly truncate at UINT16_MAX (although this should be impossible to
achieve in practice).
This commit makes sure that all discovery complete and read/write status
events set the status to zero on success.
The status value will be implementation-dependent on non-success cases.
On btstack there's no status associated with the read result; it comes
through as a separate event. This allows you to detect read failures or
timeouts.
There doesn't appear to be any use for only triggering on specific events,
so it's just easier to number them sequentially. This makes them smaller
values so they take up only 1 byte in the ringbuf, only 1 byte for the
opcode in the bytecode, and makes room for more events.
Also add a couple of new event types that need to be implemented (to avoid
re-numbering later).
And rename _COMPLETE and _STATUS to _DONE for consistency.
In the future the "trigger" keyword argument can be reinstated by requiring
the user to compute the bitmask, eg:
ble.irq(handler, 1 << _IRQ_SCAN_RESULT | 1 << _IRQ_SCAN_DONE)
This commit implements an LED class with rudimentary parts of a pin C API
to support it. The LED class does not yet support setting an intensity.
This LED class is put in the machine module for the time being, until a
better place is found.
One LED is supported on TEENSY40 and MIMXRT1010_EVK boards.
Changes are:
- string0 is no longer built when building for host as the target, because
it'll be provided by the system libc and may in some cases clash with the
system one (eg on OSX).
- mp_int_t/mp_uint_t are defined in terms of intptr_t/uintptr_t to support
both 32-bit and 64-bit builds.
- Configuration values which are the default in py/mpconfig.h are removed
from mpconfigport.h to make the configuration a bit more minimal, eg as
a better starting point for new ports.
This adds a new command line option `-v` to `tools/codeformat.py` to enable
verbose printing of all files that are scanned.
Normally `uncrustify` and `black` are called with the `-q` option, so
setting verbose suppresses the `-q` option and, for `black`, also enables
its `-v` option which makes it print out all file names matching the
filter, similar to how `uncrustify` does by default.
The powerpc port can be built with two different UART drivers, so build
both in CI.
The default compiler is now powerpc64le-linux-gnu- so it does not need to
be specified on the command line.
Microwatt may have firmware that places data in r3, which was used to
detect microwatt vs powernv. This breaks the existing probing of the UART
type in this powerpc port.
Instead build only the appropriate UART into the firmware, selected by
passing the option UART=potato or UART=lpc_serial to the Makefile.
A future enhancement would be to parse the device tree and configure
MicroPython based on the settings.
The code previously called rtc_get_reset_reason which is a "raw" reset
cause. The ESP-IDF massages that for the proper reset cause available from
esp_reset_reason.
Fixes issue #5134.
This suppresses the "Parsing: <file> as language C" lines. This makes
parsing run a bit faster and on CI it makes for less scrolling through logs
(and black already uses the -q option).
Add configuration which otherwise has to be set via the UI so the file is
more self-contained, and remove configuration which is not needed because
it's the same as the default. The major change here is that for a while
now Appveyor has been using Visual Studio 2015 by default while we still
want to support 2013.
Older implementations deal with infinity/negative zero incorrectly. This
commit adds generic fixes that can be enabled by any port that needs them,
along with new tests cases.
Otherwise functions like memset might get optimised to call themselves (eg
with gcc 10). And provide CFLAGS_BUILTIN so these options can be changed
by a port if needed.
Fixes issue #6053.
The linker script was included in the "$^" inputs, causing the build to
fail:
LINK build/firmware.elf
powerpc64le-linux-gnu-ld: error: linker script file 'powerpc.lds' appears multiple times
As a fix the linker script is left as a dependency of the elf, but only the
object files are linked.
The PWM driver uses a double buffer for the PWM timing array, one in
current use and the other one to update when changing duty parameters.
The issue was that once the duty parameters were changed the updated buffer
was applied immediately without synchronising to the start of the PWM
period. By moving the buffer toggling/swapping to the interrupt when the
cycle is done there are no more glitches.
Specifically:
- pca10040: It was already the default.
- microbit: It uses nRF51, has a Cortex-M0, and has additional libraries.
- pca10056: It has USB CDC.
Commit 6cea369b89 updated the TinyUSB
submodule to a version based on nrfx v2.0.0. This commit updates the nrf
port to work with the latest TinyUSB and nrfx v2.0.0.
Because it can confuse older versions of gcc. Instead use the correct
instruction for Thumb vs Thumb-2 (sub vs subs) so the assembler emits the
2-byte instruction.
Related to commit 1aa9ff9141.
Prior to e0905e85a7 it was possible to
disable btree support at build time. This patch allows configuring btree
support via make again, as well as the two newly introduced options for FAT
and LFS2 filesystems.
Before this change, any NimBLE error that does not appear in the
ble_hs_err_to_errno_table maps to return code 0, meaning success. If we
miss adding an error code to the table we end up returning success in case
of failure.
Instead, handle the zero case explicitly and default to MP_EIO. This
allows removing the now-redundant MP_EIO entries from the mapping.
This commit allows the user to set/get the GAP device name used by service
0x1800, characteristic 0x2a00. The usage is:
BLE.config(gap_name="myname")
print(BLE.config("gap_name"))
As part of this change the compile-time setting
MICROPY_PY_BLUETOOTH_DEFAULT_NAME is renamed to
MICROPY_PY_BLUETOOTH_DEFAULT_GAP_NAME to emphasise its link to GAP and this
new "gap_name" config value. And the default value of this for the NimBLE
bindings is changed from "PYBD" to "MPY NIMBLE" to be more generic.
This commit fixes the behaviour of socket.getaddrinfo on the ESP32 so it
raises an OSError when the name resolution fails instead of returning a []
or a resolution for 0.0.0.0.
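With this change, code written for CPython's behaviour works unchanged, for
example:
import socket
try:
    addr = socket.getaddrinfo("host.invalid", 80)
except OSError as er:
    print("name resolution failed:", er)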
Tests are added (generic and ESP32-specific) to verify behaviour consistent
with CPython, modulo the different types of exceptions per MicroPython
documentation.
The ones that are moved out of iRAM should not need to be there, because
either they call functions in iROM (eg mp_hal_stdout_tx_str), or they are
only ever called from a function in iROM and not from an interrupt (eg
ets_esf_free_bufs).
This frees up about 800 bytes of iRAM.
If the new name starts with '/', cur_dir is not prepended any more, so that
the current working directory is respected. And extend the test cases for
rename to cover this functionality.
This change scans for '.', '..' and multiple '/' and normalizes the new
path name. If the resulting path does not exist, an error is raised.
Non-existing interim path elements are ignored if they are removed during
normalization.
This fixes the bug that stat(filename) would not consider the current
working directory. So if e.g. the cwd is "lib", then stat("main.py") would
return the info for "/main.py" instead of "/lib/main.py".
The zephyr build system supports merging application-level board
configurations, so there is no need to reproduce this functionality in
MicroPython.
If CONF_FILE is not explicitly set, then the zephyr build system looks for
prj.conf in the application directory. Therefore we rename the MicroPython
prj_base.conf to prj.conf.
Furthermore, if the zephyr build system finds boards/$(BOARD).conf in the
application directory, it merges that configuration with prj.conf.
Therefore we rename all the MicroPython board .conf files and move them
into a boards/ directory.
The minimal configuration, prj_minimal.conf, is left in the application
directory because it is used as an explicitly set CONF_FILE in
make-minimal.
On arm64 with CPython:
>>> _thread.stack_size(32*1024)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: size not valid: 32768 bytes
So increase the CPython value in the test to 512k so it runs on more
systems (on modern Linux the default stack size is usually 8MB).
Constant expression like "2 ** 3" will now be folded, and the special form
"X = const(2 ** 3)" will now compile because the argument to the const is
now a constant.
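For example, this now compiles (a small illustrative snippet):
from micropython import const

_BUF_LEN = const(2 ** 10)  # folded to 1024 at compile time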
Fixes issue #5865.
This commit adds several small items to improve the support for OTA
updates on an esp32:
- a partition table for 4MB flash modules that has two OTA partitions ready
to go to do updates
- a GENERIC_OTA board that uses that partition table and that enables
automatic roll-back in the bootloader
- a new esp32.Partition.mark_app_valid_cancel_rollback() class-method to
signal that the boot is successful and should not be rolled back at the
next reset
- an automated test for doing an OTA update
- documentation updates
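A sketch of how the new class-method is intended to be used from application
code once a freshly-booted OTA image has verified itself:
import esp32
# perform any self-tests here, then:
esp32.Partition.mark_app_valid_cancel_rollback()  # prevents roll-back at the next reset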
For ports that have a system malloc which is not garbage collected (eg
unix, esp32), the stream object for the DB must be retained separately to
prevent it from being reclaimed by the MicroPython GC (because the
berkeley-db library uses malloc to allocate the DB structure which stores
the only reference to the stream).
Although in some cases the user code will explicitly retain a reference to
the underlying stream because it needs to call close() on it, this is not
always the case, eg in cases where the DB is intended to live forever.
Fixes issue #5940.
GPIO interrupts can occur when the flash ROM cache is in use and so the
GPIO interrupt handler must be in iRAM. This commit moves the handler to
iRAM, and also moves mp_sched_schedule to iRAM which is called by
pin_intr_handler.
As part of this fix the Pin class can no longer support hard=True in the
Pin.irq() method, because the VM and runtime are too big to put in iRAM.
Fixes#5714.
Explicitly add the repository as upstream and fetch the master commit.
This makes this bare-arm/minimal job more robust when finding the
fork-point of a PR relative to upstream/master (especially for forks of
this repo).
No functionality change is intended with this commit, it just consolidates
the separate implementations of GC helper code to the lib/utils/ directory
as a general set of helper functions useful for any port. This reduces
duplication of code, and makes it easier for future ports or embedders to
get the GC implementation correct.
Ports should now link against gchelper_native.c and either gchelper_m0.s or
gchelper_m3.s (currently only Cortex-M is supported but other architectures
can follow), or use the fallback gchelper_generic.c which will work on
x86/x64/ARM.
The gc_helper_get_sp function from gchelper_m3.s is not really GC related
and was only used by cc3200, so it has been moved to that port and renamed
to cortex_m3_get_sp.
But only when bluetooth is enabled, i.e. if building the dev or coverage
variants, and we have libusb available.
Update travis to match, i.e. specify the variant when doing
`make submodules`.
One can now use `-i micropython` and `-i cpython` to add instances using
the `MICROPYTHON` and `CPYTHON3` variables (which can be overridden by env
vars).
This commit adds full support to the unix port for Bluetooth using the
common extmod/modbluetooth Python bindings. This uses the libusb HCI
transport, which supports many common USB BT adaptors.
This allows user code that inherits from uio.IOBase to return an errno
error code from the user readinto/write function, by returning a negative
value. Eg returning -123 means an errno of 123. This is already how the
custom ioctl works.
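A minimal sketch of a user stream using this (uerrno provides the error
constants):
import uio, uerrno

class OneByte(uio.IOBase):
    def __init__(self):
        self.done = False
    def readinto(self, buf):
        if self.done:
            return -uerrno.EAGAIN  # negative value: reported as errno EAGAIN to the caller
        buf[0] = 0x41
        self.done = True
        return 1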
This change is made for two reasons:
1. A 3rd-party library (eg berkeley-db-1.xx, axtls) may use the system
provided errno for certain errors, and yet MicroPython stream objects
that it calls will be using the internal mp_stream_errno. So if the
library returns an error it is not known whether the corresponding errno
code is stored in the system errno or mp_stream_errno. Using the system
errno in all cases (eg in the mp_stream_posix_XXX wrappers) fixes this
ambiguity.
2. For systems that have threading the system-provided errno should always
be used because the errno value is thread-local.
For systems that do not have an errno, the new lib/embed/__errno.c file is
provided.
Note: the uncrustify configuration is explicitly set to 'add' instead of
'force' in order not to alter the comments which use extra spaces after //
as a means of indenting text for clarity.
This commit consolidates a number of check_esp_err functions that check
whether an ESP-IDF return code is OK and raises an exception if not. The
exception raised is an OSError with the error code as the first argument
(negative if it's ESP-IDF specific) and the ESP-IDF error string as the
second argument.
This commit also fixes esp32.Partition.set_boot to use check_esp_err, and
uses that function for a unit test.
This commit adds an idf_heap_info(capabilities) method to the esp32 module
which returns info about the ESP-IDF heaps. It's useful to get a bit of a
picture of what's going on when code fails because ESP-IDF can't allocate
memory anymore. Includes documentation and a test.
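A usage sketch (HEAP_DATA selects data-capable heaps, HEAP_EXEC selects
executable ones):
import esp32
for region in esp32.idf_heap_info(esp32.HEAP_DATA):
    print(region)  # per-region tuple of total/free/largest-free/min-free sizes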
This is to make the Travis CI size check more robust, by not relying on the
saved firmware from a previous build (which may use a different compiler,
environment, etc) but rather compile both master and the PR and diff them.
This size check now checks both bare-arm and minimal x86-32 builds (before
it just checked minimal Cortex-M build).
Error string compression is not deterministic in certain cases: it depends
on the Python version (whether dicts are ordered by default or not) and
probably also the order files are passed to this script, leading to a
difference in which words are included in the top 128 most common.
The changes in this commit use OrderedDict to keep parsed lines in a known
order, and, when computing how many bytes are saved by a given word, it
uses the word itself to break ties (which would otherwise be "random").
In mboot, the ability to override the USB vendor/product id's was added
back in 5688c9ba09. However, when the main
firmware is turned into a DFU file the default VID/PID are used there.
pydfu.py doesn't care about this but dfu-util does and prevents its use
when the VID/PID don't match.
This commit exposes BOOTLOADER_DFU_USB_VID/PID as make variables, for use
on either command line or mpconfigboard.mk, to set VID/PID in both mboot
and DFU files.
Add -Wdouble-promotion and -Wfloat-conversion for most ports to rule out
implicit floating point conversions, and add extra Travis builds using
MICROPY_FLOAT_IMPL_FLOAT to uncover warnings which weren't found
previously. For the unix port -Wsign-comparison is added as well, but only
there since clang supports it and gcc doesn't.
For combinations of certain versions of glibc and gcc the definition of
fpclassify always takes float as argument instead of adapting itself to
float/double/long double as required by the C99 standard. At the time of
writing this happens for instance for glibc 2.27 with gcc 7.5.0 when
compiled with -Os and glibc 3.0.7 with gcc 9.3.0. When calling fpclassify
with double as argument, as in objint.c, this results in an implicit
narrowing conversion which is not really correct plus results in a warning
when compiled with -Wfloat-conversion. So fix this by spelling out the
logic manually.
When the unix and windows ports use MICROPY_FLOAT_IMPL_FLOAT instead of
MICROPY_FLOAT_IMPL_DOUBLE, the test output has for example
complex(-0.15052, 0.34109) instead of the expected
complex(-0.15051, 0.34109).
Use one decimal place less for the output printing to fix this.
Initially some of these were found building the unix coverage variant on
MacOS because that build uses clang and has -Wdouble-promotion enabled, and
clang performs more vigorous promotion checks than gcc. Additionally the
codebase has been compiled with clang and msvc (the latter with warning
level 3), and with MICROPY_FLOAT_IMPL_FLOAT to find the rest of the
conversions.
Fixes are implemented either as explicit casts, or by using the correct
type, or by using one of the utility functions to handle floating point
casting; these have been moved from nativeglue.c to the public API.
For jobs which run tests multiple times terminate after the first run fails
otherwise the next test run overwrites the previous results, making
--print-failures useless.
Looking at the recent build history the time it takes just to complete the
OSX build is already 12 minutes so make it start early, which brings down
the total build time from about 20 minutes to 14 minutes.
Change mp_uint_t to size_t to match the mp_print_strn_t function prototype.
This fixes a compiler warning when mp_uint_t and size_t are not the same
size.
This commit provides a typedef for mp_rom_error_text_t, and a macro define
for MP_COMPRESSED_ROM_TEXT, when MICROPY_ROM_TEXT_COMPRESSION is disabled.
This simplifies the configuration (it no longer has a special case for
MICROPY_ENABLE_DYNRUNTIME) and makes it work for other cases that don't use
compression (eg examples/embedding). This commit also ensures
MICROPY_ROM_TEXT_COMPRESSION is defined during qstr processing.
Now that error string compression is supported it's more important to have
consistent error string formatting (eg all lowercase English words,
consistent contractions). This commit cleans up some of the strings to
make them more consistent.
This commit adds Loop.new_event_loop() which is used to reset the singleton
event loop. This functionality is put here instead of in Loop.close() to
make it possible to write code that is compatible with CPython.
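A sketch of the CPython-compatible pattern this enables (assuming the
module-level alias new_event_loop() as in CPython's asyncio):
import uasyncio as asyncio

async def main():
    await asyncio.sleep_ms(1)

asyncio.run(main())
asyncio.new_event_loop()  # reset the singleton event loop
asyncio.run(main())       # runs again on a fresh loop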
Because the atomic section starts after checking whether the scheduler
state is pending, it's possible it can become a different state by the time
the atomic section starts.
This is especially likely on ports where MICROPY_BEGIN_ATOMIC_SECTION is
implemented with a mutex (i.e. it might block), but the race exists
regardless, i.e. if a context switch occurs between those two lines.
This macro is used to implement global serialisation, typically by
disabling IRQs. On the unix port, if threading is enabled, use the
existing thread mutex (that protects the thread list structure) for this
purpose. Other places in the code (eg the scheduler) assume this macro
will provide serialisation.
Based on eg 1e6fd9f2b4, it's understood that
the intention for unix builds is that regular builds disable assert, but
the coverage build should set -O0 and enable asserts.
It looks like this didn't work (even before variants were introduced, eg at
v1.11) -- coverage always built with -Os and -DNDEBUG.
This commit makes it possible for variants to have finer-grained control
over COPT flags, and enables assert() and -O0 on coverage builds.
Other variants already match the defaults so they have been updated.
TimeoutError was added back in 077812b2ab for
the cc3200 port. In f522849a4d the cc3200
port enabled use of it in the socket module aliased to socket.timeout. So
it was never added to the builtins. Then it was replaced by
OSError(ETIMEDOUT) in 047af9b10b.
The esp32 port has enabled this exception since the very beginning of that
port, but it could never be accessed because it's not in builtins.
It's being removed: 1) to not encourage its use; 2) because there are a lot
of other OSError subclasses which are not defined at all, and having
TimeoutError is a bit inconsistent.
Note that ports can add anything to the builtins via MICROPY_PORT_BUILTINS.
And they can also define their own exceptions using the
MP_DEFINE_EXCEPTION() macro.
In this part of the code there is no way to get the ** operator, so no need
to check for it.
This commit also adds tests for this, and other related, invalid const
operations.
Recent builds are failing with the following error:
Error: pkg-config 0.29.2 is already installed
Assuming this will be the case from now on, we don't have to install
pkgconfig anymore.
The latest version of BTstack has a bug fixed so that it correctly
configures scan parameters if they are set right after activating the
stack. This means that BLE.gap_scan() will correctly set the scanning to
passive and so SCAN_RSP events are not passed through, so we don't need to
explicitly filter them in our bindings.
Making it more specific to use 0x02 for displays with an aspect ratio > 2
(resolutions 96x16 and 128x32) and 0x12 for all other sizes, as recommended
by @mcauser. Tested with a 64x32 display which did not work before.
The decompression of error-strings is only done if the string is accessed
via printing or via er.args. Tests are added for this feature to ensure
the decompression works.
The idea here is that there's a moderate amount of ROM used up by exception
text. Obviously we try to keep the messages short, and the code can enable
terse errors, but it still adds up. Listed below is the total string data
size for various ports:
bare-arm 2860
minimal 2876
stm32 8926 (PYBV11)
cc3200 3751
esp32 5721
This commit implements compression of these strings. It takes advantage of
the fact that these strings are all 7-bit ascii and extracts the top 128
frequently used words from the messages and stores them packed (dropping
their null-terminator), then uses (0x80 | index) inside strings to refer to
these common words. Spaces are automatically added around words, saving
more bytes. This happens transparently in the build process, mirroring the
steps that are used to generate the QSTR data. The MP_COMPRESSED_ROM_TEXT
macro wraps any literal string that should be compressed, and it's
automatically decompressed in mp_decompress_rom_string.
There are many schemes that could be used for the compression, and some are
included in py/makecompresseddata.py for reference (space, Huffman, ngram,
common word). Results showed that the common-word compression gets better
results. This is before counting the increased cost of the Huffman
decoder. This might be slightly counter-intuitive, but this data is
extremely repetitive at a word-level, and the byte-level entropy coder
can't quite exploit that as efficiently. Ideally one would combine both
approaches, but for now the common-word approach is the one that is used.
For additional comparison, the size of the raw data compressed with gzip
and zlib is calculated, as a sort of proxy for a lower entropy bound. With
this scheme we come within 15% on stm32, and 30% on bare-arm (i.e. we use
x% more bytes than the data compressed with gzip -- not counting the code
overhead of a decoder, and how this would be hypothetically implemented).
The feature is disabled by default and can be enabled by setting
MICROPY_ROM_TEXT_COMPRESSION at the Makefile-level.
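As a rough Python sketch of the common-word idea only (the real build-time
implementation is in py/makecompresseddata.py and works over the whole string
table; the word list here is made up):
WORDS = ["invalid", "argument", "expected"]  # stand-in for the top-128 word table

def compress(msg):
    out = bytearray()
    for i, word in enumerate(msg.split(" ")):
        if word in WORDS:
            out.append(0x80 | WORDS.index(word))  # one byte refers to a common word
        elif i == 0 or (out and out[-1] & 0x80):
            out += word.encode()  # a space is implied next to a common word
        else:
            out += b" " + word.encode()
    return bytes(out)

def decompress(data):
    out = ""
    for b in data:
        if b & 0x80:
            out += ("" if out == "" or out[-1] == " " else " ") + WORDS[b & 0x7F] + " "
        else:
            out += chr(b)
    return out.rstrip(" ")

print(decompress(compress("invalid argument expected")))  # round-trips the message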
This commit makes all functions and function wrappers in modubinascii.c
STATIC and conditional on the MICROPY_PY_UBINASCII setting, which will
exclude the file from qstr/compressed-string searching when ubinascii is
not enabled. The now-unused modubinascii.h header file is also removed.
The cc3200 port is updated accordingly to use this module in its entirety
instead of providing its own top-level definition of ubinascii.
This was originally like this because the cc3200 port has its own ubinascii
module which referenced these methods. The plan appeared to be that the
API might diverge (e.g. hardware crc), but this should be done similar to
I2C/SPI via a port-specific handler, rather than the port having its own
definition of the module. Having a centralised module definition also
enforces consistency of the API among ports.
Instead of compiler-level if-logic. This is necessary to know what error
strings are included in the build at the preprocessor stage, so that string
compression can be implemented.
This commit changes the default filesystem type for esp32 to littlefs v2.
This port already enables both VfsFat and VfsLfs2, so either can be used
for the filesystem, and existing systems that use FAT will still work.
This commit changes the esp8266 boards to use littlefs v2 as the
filesystem, rather than FAT. Since the esp8266 doesn't expose the
filesystem to the PC over USB there's no strong reason to keep it as FAT.
Littlefs is smaller in code size, is more efficient in use of flash to
store data, is resilient over power failure, and using it saves about 4k of
heap RAM, which can now be used for other things.
This is a backwards incompatible change because all existing esp8266 boards
will need to update their filesystem after installing new firmware (eg
backup old files, install firmware, restore files to new filesystem).
As part of this commit the memory layout of the default board (GENERIC) has
changed. It now allocates all 1M of memory-mapped flash to the firmware,
so the filesystem area starts at the 2M point. This is done to allow more
frozen bytecode to be stored in the 1M of memory-mapped flash. This
requires an esp8266 module with 2M or more of flash to work, so a new board
called GENERIC_1M is added which has the old memory-mapping (but still
changed to use littlefs for the filesystem).
In summary there are now 3 esp8266 board definitions:
- GENERIC_512K: for 512k modules, doesn't have a filesystem.
- GENERIC_1M: for 1M modules, 572k for firmware+frozen code, 396k for
filesystem (littlefs).
- GENERIC: for 2M (or greater) modules, 968k for firmware+frozen code,
1M+ for filesystem (littlefs), FAT driver also included in firmware for
use on, eg, external SD cards.
This commit adds support for global exception handling in uasyncio
according to the CPython error handling:
https://docs.python.org/3/library/asyncio-eventloop.html#error-handling-api
This allows a program to receive exceptions from detached tasks and log
them to an appropriate location, instead of them being printed to the REPL.
The implementation preallocates a context dictionary so in case of an
exception there shouldn't be any RAM allocation.
The approach here is compatible with CPython except that in CPython the
exception handler is called once the task that threw an uncaught exception
is freed, whereas in MicroPython the exception handler is called
immediately when the exception is thrown.
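A minimal sketch of how the handler can be used (handler and task names are
made up; the context dict is expected to carry at least the exception):
import uasyncio as asyncio

def handle_exception(loop, context):
    print("uncaught exception:", context["exception"])

async def failing_task():
    raise ValueError("oops")

async def main():
    loop = asyncio.get_event_loop()
    loop.set_exception_handler(handle_exception)
    asyncio.create_task(failing_task())  # detached task: its exception goes to the handler
    await asyncio.sleep(0.1)

asyncio.run(main())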
Following up to 5e6cee07ab, some systems (eg
FreeBSD 12.0 64-bit) will crash if the stack-overflow margin is too small.
It seems the margin of 8192 bytes (or thereabouts) is always needed. This
commit adds this much margin if the requested stack size is too small.
Fixes issue #5824.
This adds a couple of commands to the run-tests script to print the diffs
of failed tests and also to clean up the .exp and .out files after failed
tests. (And a spelling error is fixed while we are touching nearby code.)
Travis is also updated to use these new commands, including using it for
more builds.
Since automatically formatting tests with black, we have lost one line of
code coverage. This adds an explicit test to ensure we are testing the
case that is no longer covered implicitly.
This adds the Python files in the tests/ directory to be formatted with
./tools/codeformat.py. The basics/ subdirectory is excluded for now so we
aren't changing too much at once.
In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
These were found by building the unix coverage variant on macOS (so the clang
compiler). Mostly these fix implicit casts of float/double to mp_float_t
(which is one of those two types), plus one mp_int_t to size_t fix for good
measure.
These are mainly used by the previous version of uasyncio which is now
replaced by a newer version, with built-in C module _uasyncio. Saves about
1300 bytes of flash.
https://www.python.org/dev/peps/pep-0475/
This implements something similar to PEP 475 on the unix port, and for the
VfsPosix class.
There are a few differences from the CPython implementation:
- Since we call mp_handle_pending() between any EINTR's, additional
functions could be called if MICROPY_ENABLE_SCHEDULER is enabled, not
just signal handlers.
- CPython only handles signals on the main thread, so other threads will
raise InterruptedError instead of retrying. On MicroPython,
mp_handle_pending() will currently raise exceptions on any thread.
A new macro MP_HAL_RETRY_SYSCALL is introduced to reduce duplicated code
and ensure that all instances behave the same. This will also allow other
ports that use POSIX-like system calls (and use, eg, VfsPosix) to provide
their own implementation if needed.
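The retry semantics are roughly as in the following Python-level sketch
(illustration only; the real macro works on C return codes, and errno.EINTR is
used here purely for demonstration):
import errno

def retry_syscall(do_call):
    while True:
        try:
            return do_call()
        except OSError as exc:
            if exc.args[0] != errno.EINTR:
                raise
            # the C macro calls mp_handle_pending() at this point before retrying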
The stack size adjustment for detecting stack overflow in threads was not
taking into account that the requested stack size could be <= 8k, in which
case the subtraction would overflow. This is fixed in this commit by
ensuring that the adjustment can't be more than the available size.
This fixes the test tests/thread/thread_stacksize1.py which sometimes
crashes with a segmentation fault because of an uncaught NLR jump, which is
a "maximum recursion depth exceeded" exception.
Suggested-by: @dpgeorge
Implements Task and TaskQueue classes in C, using a pairing-heap data
structure. Using this reduces RAM use of each Task, and improves overall
performance of the uasyncio scheduler.
Includes a test where the (non uasyncio) client does a RST on the
connection, as a simple TCP server/client test where both sides are using
uasyncio, and a test for TCP stream close then write.
This commit adds a completely new implementation of the uasyncio module.
The aim of this version (compared to the original one in micropython-lib)
is to be more compatible with CPython's asyncio module, so that one can
more easily write code that runs under both MicroPython and CPython (and
reuse CPython asyncio libraries, follow CPython asyncio tutorials, etc).
Async code is not easy to write and any knowledge users already have from
CPython asyncio should transfer to uasyncio without effort, and vice versa.
The implementation here attempts to provide good compatibility with
CPython's asyncio while still being "micro" enough to run where MicroPython
runs. This follows the general philosophy of MicroPython itself, to make it
feel like Python.
The main change is to use a Task object for each coroutine. This allows
more flexibility to queue tasks in various places, eg the main run loop,
tasks waiting on events, locks or other tasks. It no longer requires
pre-allocating a fixed queue size for the main run loop.
A pairing heap is used to queue Tasks.
It's currently implemented in pure Python, separated into components with
lazy importing for optional components. In the future parts of this
implementation can be moved to C to improve speed and reduce memory usage.
But the aim is to maintain a pure-Python version as a reference version.
To enable lazy loading of submodules (among other things), which is very
useful for MicroPython libraries that want to have optional subcomponents.
Disabled explicitly on minimal ports.
This function is not used by the core but having it as part of the build
allows it to be used by user C modules, or board extensions. The linker
won't include it in the final firmware if it remains unused.
On CPython, and with pylint, the variables MATCH_ROM and SEARCH_ROM are
undefined. This code works in MicroPython because these variables are
constants and the MicroPython parser/compiler optimises them out. But it
is not valid Python because they are technically undefined within the scope
they are used.
This commit makes the code valid Python code. The const part is removed
completely because these constants are part of the public API and so cannot
be moved to the global scope (where they could still use the MicroPython
const optimisation).
This removes the port-specific definition of MP_PLAT_PRINT_STRN on the
windows port, so that the default mp_hal_stdout_tx_strn_cooked() is always
used. This fixes releasing the GIL during the call to write() (this was
missed in bc3499f010).
Also, mp_hal_dupterm_tx_strn() was defined but never used anywhere so it is
safe to delete it.
This removes the port-specific definition of MP_PLAT_PRINT_STRN on the unix
port. Since fee7e5617f this is no longer a
single function call so we are not really optimising anything over using
the default definition of MP_PLAT_PRINT_STRN which calls
mp_hal_stdout_tx_strn_cooked().
Zephyr v2.2 reworked its gpio api to support linux device tree bindings and
pin logical levels. This commit updates the zephyr port's machine.Pin
class to replace the deprecated gpio api calls with the new supported gpio
api. This resolves several build warnings.
Tested on frdm_k64f and mimxrt1050_evk boards.
The "led" argument is always a pointer to the GPIO port, or'd with the pin
that the LED is on, so testing that it is "1" is unnecessary. The type of
"led" is also changed to uint32_t so it can properly hold a 32-bit pointer.
Updating the LED0 state from systick handler ensures LED0 is always
consistent with its flash rate regardless of other processing going on in
either interrupts or main. This improves the visible stability of the
bootloader, rather than LED0 flashing somewhat randomly at times.
This commit also changes the LED0 flash rate depending on the current state
of DFU, giving slightly more visual feedback on what the device is doing.
Also support MP_STREAM_GET_FILENO ioctl. The stdio flush change was done
previously for the unix port in 3e0b46b9af.
These changes make this POSIX file implementation equivalent to the unix
file implementation.
Fixes UDP non-blocking recv so it returns EAGAIN instead of ETIMEDOUT.
Timeout waiting for incoming data is also improved by replacing 100ms delay
with poll_sockets(), as is done in other parts of this module.
Fixes issue #5759.
Adds support in the zephyr port to execute main.py if the file system is
enabled and the file exists. Existing support for executing a main.py
frozen module is preserved, since pyexec_file_if_exists() works just
like pyexec_frozen_module() if there's no vfs.
Enables the zephyr usb device stack and mass storage class on the
mimxrt1050_evk board. The mass storage class is backed by the sdhc disk
access driver, so it's now possible to browse and modify the contents of
the SD card from a USB host (your PC). This is in preparation to support
writing a main.py script to the SD card, and then executing it after the
next reset.
Adds support in the zephyr port to mount a file system if a block device
(sdhc disk access or flash area) is available. The mount point is either
"/sd" or "/flash" depending on the type of block device.
Tested with an sdhc disk access block device and fatfs on the
mimxrt1050_evk board.
Tested with a flash area block device and littlefs on the reel_board.
This commit adds micropython.heap_locked() which returns the current
lock-depth of the heap, and can be used by Python code to check if the heap
is locked or not. This new function is configured via
MICROPY_PY_MICROPYTHON_HEAP_LOCKED and is disabled by default.
This commit also changes the return value of micropython.heap_unlock() so
it returns the current lock-depth as well.
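For example, with MICROPY_PY_MICROPYTHON_HEAP_LOCKED enabled (a sketch; the
exact numbers returned depend on the nesting depth):
import micropython

def demo():
    micropython.heap_lock()
    locked = micropython.heap_locked()   # non-zero while the heap is locked
    depth = micropython.heap_unlock()    # heap_unlock() now also reports the lock depth
    print(locked, depth)

demo()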
This is an extremely minimal port to the NXP i.MX RT, in the style of the
SAMD port. It's largely based on the TinyUSB mimxrt implementation, using
the NXP SDK. It currently supports the Teensy 4.0 board with a REPL over
the USB-VCP interface.
This commit also adds the NXP SDK submodule (also from TinyUSB) to
lib/nxp_driver.
Note: if you already have the tinyusb submodule initialized recursively you
will need to run the following as the tinyusb sub-submodules have been
rearranged (upstream):
git submodule deinit lib/tinyusb
rm -rf .git/modules/lib/tinyusb
git submodule update --init lib/tinyusb
This eliminates the need for the sizeof regex fixup by rearranging things a
bit. All other bitfields already use the parentheses around expressions
with sizeof, so one case is fixed by following this convention.
VM_MAX_STATE_ON_STACK is the only remaining problem and it can be worked
around by changing the order of the operands.
The double-% was added in 11de8399fe (Jun
2014) when such errors were formatted with printf. But then
55830dd9bf (Dec 2018) changed
mp_obj_new_exception_msg() to not format the message, as discussed
in #3004. So such error strings are no longer formatted and a % is just
that.
This commit changes the BLE _IRQ_SCAN_RESULT data from:
addr_type, addr, connectable, rssi, adv_data
to:
addr_type, addr, adv_type, rssi, adv_data
This allows _IRQ_SCAN_RESULT to handle all scan result types (not just
connectable and non-connectable passive scans), and to distinguish between
them using adv_type which is an integer taking values 0x00-0x04 per the BT
specification.
This is a breaking change to the API, albeit a very minor one: the existing
connectable value was a boolean and True now becomes 0x00, False becomes
0x02.
Documentation is updated and a test added.
Fixes #5738.
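An IRQ handler consuming the new tuple layout might look like the sketch
below (the numeric value of _IRQ_SCAN_RESULT is assumed here; check the
ubluetooth documentation for the firmware in use):
import bluetooth

_IRQ_SCAN_RESULT = 5  # assumed value, see the docs for your firmware version

def bt_irq(event, data):
    if event == _IRQ_SCAN_RESULT:
        addr_type, addr, adv_type, rssi, adv_data = data
        # adv_type is 0x00-0x04 per the BT spec (eg 0x00 ADV_IND, 0x04 SCAN_RSP)
        print(addr_type, bytes(addr), adv_type, rssi, bytes(adv_data))

ble = bluetooth.BLE()
ble.active(True)
ble.irq(bt_irq)
ble.gap_scan(2000, 30000, 30000)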
This commit ensures that the BLE stack is active before allowing operations
that may otherwise crash if it's not active. It also clarifies the state
better (adding the "stopping" state) and renames mp_bluetooth_is_enabled to
the more self-explanatory mp_bluetooth_is_active.
This commit adds a test runner and initial test scripts which run multiple
Python/MicroPython instances (eg executables, target boards) in parallel.
This is useful for testing, eg, network and Bluetooth functionality.
Each test file has a set of functions called instanceX(), where X ranges
from 0 up to the maximum number of instances that are needed, N-1. Then
run-multitests.py will execute this script on N separate instances (eg
micropython executables, or attached boards via pyboard.py) at the same
time, synchronising their start in the right order, possibly passing IP
address (or other address like bluetooth MAC) from the "server" instance to
the "client" instances so they can connect to each other. It then runs
them to completion, collects the output, and then tests against what
CPython gives (or what's in a provided .py.exp file).
The tests will be run using the standard unix executable for all instances
by default, eg:
$ ./run-multitests.py multi_net/*.py
Or they can be run with a board and unix executable via:
$ ./run-multitests.py --instance pyb:/dev/ttyACM0 --instance exec:micropython multi_net/*.py
This makes a cleaner separation between the driver, HCI UART and BT stack.
Also updated the naming to be more consistent (mp_bluetooth_hci_*).
Work done in collaboration with Jim Mussared aka @jimmo.
Move extmod/modbluetooth_nimble.* to extmod/nimble. And move common
Makefile lines to extmod/nimble/nimble.mk (which was previously only used
by stm32). This allows (upcoming) btstack to follow a similar structure.
Work done in collaboration with Jim Mussared aka @jimmo.
When using a manifest on Windows the reference to mpy-cross compiler was
missing the .exe file extension, so add it when appropriate.
Also allow the default path to mpy-cross to be overridden by the (optional)
MICROPY_MPYCROSS environment variable, to allow full flexibility on any OS.
sys.stdout.flush() is needed on CPython to flush the output, and the change
in this commit makes such an expression also work on MicroPython (although
MicroPython doesn't actual need to do any flushing).
The CI job will fail if there is any code which does not conform to the
style encoded by tools/codeformat.py. And it will list any changes
required.
uncrustify is built from source because Ubuntu bionic has uncrustify-0.66.1
which is from 2017/11/22 and is missing many options needed here.
This string is recognised by uncrustify, to disable formatting in the
region marked by these comments. This is necessary in the qstrdef*.h files
to prevent modification of the strings within the Q(...). In other places
it is used to prevent excessive reformatting that would make the code less
readable.
This commit adds a tool, codeformat.py, which will reformat C and Python
code to fit a certain style. By default the tool will reformat (almost)
all the original (ie not 3rd-party) .c, .h and .py files in this
repository. Passing filenames on the command-line to codeformat.py will
reformat only those. Reformatting is done in-place.
uncrustify is used for C reformatting, which is available for many
platforms and can be easily built from source, see
https://github.com/uncrustify/uncrustify. The configuration for uncrustify
is also added in this commit and values are chosen to best match the
existing code style. A small post-processing stage on .c and .h files is
done by codeformat.py (after running uncrustify) to fix up some minor
items:
- space inserted after * when used as multiplication with sizeof
- #if/ifdef/ifndef/elif/else/endif are dedented by one level when they are
configuring if-blocks and case-blocks.
For Python code, the formatter used is black, which can be pip-installed;
see https://github.com/psf/black. The defaults are used, except for line-
length which is set at 99 characters to match the "about 100" line-length
limit used in C code.
The formatting tools used and their configuration were chosen to strike a
balance between keeping existing style and not changing too many lines of
code, and enforcing a relatively strict style (especially for Python code).
This should help to keep the code consistent across everything, and reduce
cognitive load when writing new code to match the style.
And rename it to mp_obj_cast_to_native_base() to indicate this. This
allows users of this function to easily support native and native-subclass
objects in the same way (by just passing the object through this function).
Since commit 3aab54bf43 this piece of code is
no longer needed because the top-level function mp_obj_equal_not_equal()
now handles the case of user types, and will never call tuple's binary_op
function with MP_BINARY_OP_EQUAL and a non-tuple on the RHS.
Only the "==" operator was tested by the test suite in for such arguments.
Other comparison operators like "<" take a different path in the code so
need to be tested separately.
If the built-in input() is enabled (which it is by default) then it needs
some form of readline, so supply it with one when MICROPY_USE_READLINE=0.
Fixes issue #5658.
Following up to the recent commit ad7213d3c3: the name "varg2" is misleading,
and "vlist" better describes that the argument is a va_list. This name also
matches CircuitPython, which already has such
helper functions.
This changes the signal used to trigger garbage collection from SIGUSR1 to
SIGRTMIN + 5. SIGUSR1 is quite common compared to SIGRTMIN (measured by
google search results) and is more likely to conflict with libraries that
may use the same signal.
POSIX specifies that there are at least 8 real-time signals, so 5 was chosen
as a "random" number to further avoid potential conflict with libraries
that may use SIGRTMIN or SIGRTMAX.
Also, if we ever have a `usignal` module, it would be nice to leave SIGUSR1
and SIGUSR2 free for user programs.
The "random" module no longer uses the hardware RNG (the extmod version of
this module has a pseudo-random number generator), so the config option
MICROPY_PY_RANDOM_HW_RNG is no longer meaningful. This commit replaces it
with MICROPY_HW_ENABLE_RNG, which controls whether the hardware RNG is
included in the build.
The install target is currently broken when PROG is used to override the
default executable name. This fixes it by removing the redundant TARGET
variable and using PROG directly instead.
The install and uninstall targets are also moved to the common unix
Makefile so that all variants can be installed in the same way.
Currently it is not possible to override PREFIX when installing micropython
using the makefile. It is common practice to be able to run something like
this:
$ make install PREFIX=/usr DESTDIR=/tmp/staging
This fixes such usage.
These addresses were initially chosen to match the nRF24 Arduino library
examples but they are byte-reversed. So change them to be on-air
compatible with the Arduino library.
Also, the data sheet for the nRF24 says that RX data pipes 1-5 must share
the same top 32-bits, and must differ only in the LSbyte. The addresses
used here (while correct because they are on TX pipe and RX pipe 0) are
misleading in this sense, because it looks like they were chosen to share
the top 32-bits per the datasheet.
This provides a more consistent C-level API to raise exceptions, ie moving
away from nlr_raise towards mp_raise_XXX. It also reduces code size by a
small amount on some ports.
The default value for MICROPYPATH used in unix/main.c is
"~/.micropython/lib:/usr/lib/micropython" which has 2 problems when used in
the Windows port:
- it has a ':' as path separator but the port uses ';' so the entire string
is effectively discarded since it gets interpreted as a single path which
doesn't exist
- /usr/lib/micropython is not a valid path in a standard Windows
environment
Override the value with a suitable default.
If the exception doesn't need printf-style formatting then calling
mp_raise_msg is more efficient. Also shorten exception messages to match
style in core and other ports.
Both bool and namedtuple will check against other types for equality; int,
float and complex for bool, and tuple for namedtuple. So to make them work
after the recent commit 3aab54bf43 they would
need MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST set. But that makes all bool and
namedtuple equality checks less efficient because mp_obj_equal_not_equal()
could no longer short-cut x==x, and would need to try __ne__. To improve
this, this commit splits the MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST flags into 3
separate flags to give types more fine-grained control over how their
equality behaves. These new flags are then used to fix bool and namedtuple
equality.
Fixes issue #5615 and #5620.
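The behaviour preserved by these flags is visible in plain Python, for
example:
from collections import namedtuple

Point = namedtuple("Point", ("x", "y"))
print(Point(1, 2) == (1, 2))  # True: a namedtuple compares equal to a plain tuple
print(True == 1)              # True: bool compares equal to int (and float/complex)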
This fix can be demonstrated by the following:
b = bytearray(32)
f = framebuf.FrameBuffer(b, 32, 8, framebuf.MONO_HLSB)
f.pixel(0, 0, 1)
print('MONO_HLSB', hex(b[0]))
b = bytearray(32)
f = framebuf.FrameBuffer(b, 32, 8, framebuf.MONO_HMSB)
f.pixel(0, 0, 1)
print('MONO_HMSB', hex(b[0]))
Outcome:
MONO_HLSB 0x80
MONO_HMSB 0x1
It's not needed. The C integer implicit promotion rules mean that the
uint8_t of the incoming character is promoted to a (signed) int, matching
the type of interrupt_char. Thus the uint8_t incoming character can never
be equal to -1 (the value of interrupt_char that indicates that interruption
is disabled).
The mp_keyboard_interrupt() function does exactly what is needed here, and
using it gets ctrl-C working when MICROPY_ENABLE_SCHEDULER is enabled on
these ports (and MICROPY_ASYNC_KBD_INTR is disabled).
This is a more logical place to clear the KeyboardInterrupt traceback,
right before it is set as a pending exception. The clearing is also
optimised from a function call to a simple store of NULL.
It was originally in IRAM due to the linker script specification, but
since the function moved from lib/utils/interrupt_char.c to py/scheduler.c
it needs to be put back in IRAM.
Functions like mp_keyboard_interrupt() may need to be called from an IRQ
handler and may need to be in a special memory section, so provide a
generic wrapping macro for a port to do this. The macro name is chosen to
be MICROPY_WRAP_<function name in uppercase> so that (in the future with
more wrappers) each function could potentially be handled separately.
This function is tightly coupled to the state and behaviour of the
scheduler, and is a core part of the runtime: to schedule a pending
exception. So move it there.
Pending exceptions would otherwise be handled later on where there may not
be an NLR handler in place.
A similar fix is also made to the unix port's REPL handler.
Fixes issues #4921 and #5488.
The previous behaviour corresponds to this argument being set to "true", in
which case the function will raise any pending exception. Setting it to
"false" will instead cancel any pending exception.
Enables the littlefs (v1 and v2) filesystems in the zephyr port.
Example usage with the internal flash on the reel_board or the
rv32m1_vega_ri5cy board:
import os
from zephyr import FlashArea
bdev = FlashArea(FlashArea.STORAGE, 4096)
os.VfsLfs2.mkfs(bdev)
os.mount(bdev, '/flash')
with open('/flash/hello.txt','w') as f:
    f.write('Hello world')
print(open('/flash/hello.txt').read())
Things get a little trickier with the frdm_k64f due to the micropython
application spilling into the default flash storage partition defined
for this board. The zephyr build system doesn't enforce the flash
partitioning when mcuboot is not enabled (which it is not for
micropython). For now we can demonstrate that the littlefs filesystem
works on frdm_k64f by constructing the FlashArea block device on the
mcuboot scratch partition instead of the storage partition. Do this by
replacing the FlashArea.STORAGE constant above with the value 4.
Introduces a new zephyr.FlashArea class that uses the zephyr flash map
api to implement the uos.AbstractBlockDev protocol. The flash driver is
enabled on the frdm_k64f board, reel_board, and rv32m1_vega_ri5cy board.
The standard and extended block device protocols are both supported,
therefore this class can be used with file systems like littlefs which
require the extended interface.
Enables the fatfs filesystem in the zephyr port.
Example usage with an SD card on the mimxrt1050_evk board:
import zephyr, os
bdev = zephyr.DiskAccess('SDHC')
os.VfsFat.mkfs(bdev)
os.mount(bdev, '/sd')
with open('/sd/hello.txt','w') as f:
    f.write('Hello world')
print(open('/sd/hello.txt').read())
Introduces a new zephyr.DiskAccess class that uses the zephyr disk
access api to implement the uos.AbstractBlockDev protocol. This can be
used with any type of zephyr disk access driver, which currently
includes SDHC, RAM, and FLASH implementations. The SDHC driver is
enabled on the mimxrt1050_evk board.
Only the standard block device protocol (without the offset parameter)
can be supported with the zephyr disk access api, therefore this class
cannot be used with file systems like littlefs which require the
extended interface. In the future it may be possible to implement the
extended interface in a new class using the zephyr flash api.
A 'return' statement on module/class level is not correct Python, but
nothing terribly bad happens when it's allowed. So remove the check unless
MICROPY_CPYTHON_COMPAT is on.
This is similar to MicroPython's treatment of 'import *' in functions
(except 'return' has unsurprising behavior if it's allowed).
By simply reordering the enums for pyexec_mode_kind_t it eliminates a data
variable which costs ROM to initialise it. And the minimal build now has
nothing in the data section.
It seems the compiler is smart enough so that the generated code for
if-logic which tests these enum values is unchanged.
When this variable is set to non-empty string it triggers the REPL after a
command/module/file finishes running.
A Python file (without the file extension) is used because the cmdline:
parser in run-tests splits on spaces, so the -c option can't be used since
`import os` can't be written without a space.
CPython also has os.environ, which should be used instead of os.getenv()
due to caching in the os.environ mapping. But for MicroPython it makes
sense to only implement the basic underlying methods, ie getenv/putenv/
unsetenv.
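For example, on the unix port (the variable name is arbitrary):
import uos

uos.putenv("MY_VAR", "hello")
print(uos.getenv("MY_VAR"))   # hello
uos.unsetenv("MY_VAR")
print(uos.getenv("MY_VAR"))   # None once the variable is removed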
This adds a -h option to print the usage help text and adds a new, shorter
error message that is printed when invalid arguments are given. This
behaviour follows CPython (and other tools) more closely.
This commit modifies the usage() function to only print the -v option help
text when MICROPY_DEBUG_PRINTERS is enabled. The -v option requires this
build option to be enabled for it to have any effect.
The usage text is also modified to show the -i and -m options, and also
show that running a command, module or file are mutually exclusive.
This adds support for a MICROPYINSPECT environment variable that works
exactly like PYTHONINSPECT; per CPython docs:
If this is set to a non-empty string it is equivalent to specifying the
-i option.
This variable can also be modified by Python code using os.environ to
force inspect mode on program termination.
Zephyr removed the build target syscall_macros_h_target in commit
f4adf107f31674eb20881531900ff092cc40c07f. Removes reference from
MicroPython to fix build errors in the zephyr port.
This change is not compatible with zephyr v2.1 or earlier. It will be
compatible with Zephyr v2.2 when released.
The SYS_CLOCK_HW_CYCLES_TO_NS macro was deprecated in zephyr commit
8892406c1de21bd5de5877f39099e3663a5f3af1. This commit updates MicroPython
to use the new k_cyc_to_ns_floor64 api and fix build warnings in the zephyr
port.
This change is compatible with Zephyr v2.1 and later.
Zephyr restructured its includes in v2.0 and removed compatibility shims
after two releases in commit 1342dadc365ee22199e51779185899ddf7478686.
Updates include paths in MicroPython accordingly to fix build errors in
the zephyr port.
These changes are compatible with Zephyr v2.0 and later.
Show how to send an HTTP response code and content-type. Without the
response code Safari/iOS will fail. Without the content-type Lynx/Links
will fail.
When stdout is redirected it is useful to have errors printed to stderr
instead of being redirected.
mp_stderr_print() can't be used in these two instances since the
MicroPython runtime is not running so we use fprintf(stderr) instead.
This option makes pyboard.py exit as soon as the script/command is
successfully sent to the device, ie it does not wait for any output. This
can help to avoid hangs if the board is being rebooted with --command (for
example).
Example usage:
$ python3 ./tools/pyboard.py --device /dev/ttyUSB0 --no-follow \
--command 'import machine; machine.reset()'
The ability to change the host is a frequently requested feature, so
explicitly document how it can be achieved using the existing code.
See issues #2121, #4385, #4622, #5122, #5536.
This commit improves pllvalues.py to generate PLL values for H7 MCUs that
are valid (VCO in and out are in range) and extend for the entire range of
SYSCLK values up to 400MHz (up to 480MHz is currently unsupported).
This commit implements a more complete replication of CPython's behaviour
for equality and inequality testing of objects. This addresses the issues
discussed in #5382 and a few other inconsistencies. Improvements over the
old code include:
- Support for returning non-boolean results from comparisons (as used by
numpy and others).
- Support for non-reflexive equality tests.
- Preferential use of __ne__ methods and MP_BINARY_OP_NOT_EQUAL binary
operators for inequality tests, when available.
- Fallback to op2 == op1 or op2 != op1 when op1 does not implement the
(in)equality operators.
The scheme here makes use of a new flag, MP_TYPE_FLAG_NEEDS_FULL_EQ_TEST,
in the flags word of mp_obj_type_t to indicate if various shortcuts can or
cannot be used when performing equality and inequality tests. Currently
four built-in classes have the flag set: float and complex are
non-reflexive (since nan != nan) while bytearray and frozenset instances
can equal other builtin class instances (bytes and set respectively). The
flag is also set for any new class defined by the user.
This commit also includes a more comprehensive set of tests for the
behaviour of (in)equality operators implemented in special methods.
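Two of the behaviours listed above, shown as plain Python (the class here is
purely illustrative):
nan = float("nan")
print(nan == nan)  # False: float equality is non-reflexive for NaN

class Threshold:
    def __init__(self, limit):
        self.limit = limit
    def __ne__(self, other):
        print("__ne__ called")  # now preferred for inequality tests when defined
        return self.limit != other

print(Threshold(5) != 5)  # prints "__ne__ called" then False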
This board now has the following 3 build configurations:
- mboot + external QSPI in XIP mode + internal filesystem
- mboot + external QSPI with filesystem (the default)
- no mboot + external QSPI with filesystem
With a SPI flash that has more than 16MB, 32-bit addressing is required
rather than the standard 24-bit. This commit adds support for 32-bit
addressing so that the SPI flash commands (read/write/erase) are selected
automatically depending on the size of the address being used at each
operation.
This modifies the signature of mp_thread_set_state() to use
mp_state_thread_t* instead of void*. This matches the return type of
mp_thread_get_state(), which returns the same value.
`struct _mp_state_thread_t;` had to be moved before
`#include <mpthreadport.h>` since the stm32 port uses it in its
mpthreadport.h file.
PLLM is shared among all PLL blocks on F7 MCUs, and this calculation to
configure PLLSAI to have 48MHz on the P output previously assumed that PLLM
is equal to HSE (eg PLLM=25 for HSE=25MHz). This commit relaxes this
assumption to allow other values of PLLM.
The loop searches backwards for a target, but doesn't stop after finding
the first result, meaning that it'll always end up at the outermost
exception handler.
This previously made the native emitter incompatible with the bytecode
emitter, and mp_resume (and subsequently mp_obj_generator_resume) expects
the bytecode emitter behavior (i.e. throw==NULL).
This commit adds a generator test for throwing into a nested exception, and
one when using yield-from with a pending exception cleanup. Both these
tests currently fail on the native emitter, and are simplified versions of
native test failures from uasyncio in #5332.
It is not safe to enable MICROPY_ASYNC_KBD_INTR and MICROPY_PY_THREAD_GIL
at the same time. This will trigger a compiler error to ensure that it
is not possible to make this mistake.
Addition of GIL EXIT/ENTER pairs are:
- modos: release the GIL during system calls. CPython does this as well.
- moduselect: release the GIL during the poll() syscall. This call can be
blocking, so it is important to allow other threads to run at this time.
- modusocket: release the GIL during system calls. Many of these calls can
be blocking, so it is important to allow other threads to run.
- unix_mphal: release the GIL during the read and write syscalls in
mp_hal_stdin_rx_chr and mp_hal_stdout_tx_strn. If we don't do this
threads are blocked when the REPL or the builtin input function are used.
- file, main, mpconfigport.h: release GIL during syscalls in built-in
functions that could block.
When CFLAGS_EXTRA/LDFLAGS_EXTRA (or anything) is set on the command line of
a make invocation then it will completely override any setting or appending
of these variables in the makefile(s). This means builds like the coverage
variant will have their mpconfigvariant.mk settings overridden. Fix this
by using CFLAGS/LDFLAGS exclusively in the makefile(s), reserving the
CFLAGS_EXTRA/LDFLAGS_EXTRA variables for external command-line use only.
Commit d96cfd13e3 introduced a regression in
testing for bool objects, that such objects were in some cases no longer
recognised as bools, eg when using mp_obj_is_type(o, &mp_type_bool), or
mp_obj_is_integer(o).
This commit fixes that problem by adding mp_obj_is_bool(o). Builds with
MICROPY_OBJ_IMMEDIATE_OBJS enabled check if the object is any of the const
True or False objects. Builds without it use the old method of ->type
checking, which compiles to smaller code (compared with the former
mentioned method).
Fixes #5538.
When threads and the GIL are enabled, then the qstr mutex is not needed.
The qstr_mutex field is never used in this case because of:
#if MICROPY_PY_THREAD && !MICROPY_PY_THREAD_GIL
#define QSTR_ENTER() mp_thread_mutex_lock(&MP_STATE_VM(qstr_mutex), 1)
#define QSTR_EXIT() mp_thread_mutex_unlock(&MP_STATE_VM(qstr_mutex))
#else
#define QSTR_ENTER()
#define QSTR_EXIT()
#endif
So, we can completely remove qstr_mutex everywhere when MICROPY_PY_THREAD
&& !MICROPY_PY_THREAD_GIL.
When threads and the GIL are enabled, then the GC mutex is not needed. The
gc_mutex field is never used in this case because of:
#if MICROPY_PY_THREAD && !MICROPY_PY_THREAD_GIL
#define GC_ENTER() mp_thread_mutex_lock(&MP_STATE_MEM(gc_mutex), 1)
#define GC_EXIT() mp_thread_mutex_unlock(&MP_STATE_MEM(gc_mutex))
#else
#define GC_ENTER()
#define GC_EXIT()
#endif
So, we can completely remove gc_mutex everywhere when MICROPY_PY_THREAD
&& !MICROPY_PY_THREAD_GIL.
Some parts of code have been aligned to increase readability. In general
'' instead of "" was used wherever possible to keep the same convention for
the entire file. The import inspect line has been moved to the top according
to hints reported by pep8 tools. A few extra spaces were removed, a few
missing spaces were added. Comments have been updated, mostly in
"read_dfu_file" function. Some other comments have been capitalized and/or
slightly updated. A few docstrings were fixed as well. No real code
changes intended.
Translate common Ctrl-Left/Right/Delete/Backspace to the EMACS-style
sequences (i.e. Alt key based) for forward-word, backward-word, forward-kill
and backward-kill. Requires MICROPY_REPL_EMACS_WORDS_MOVE to be defined so
the readline implementation interprets these.
Prior to this commit, if the flash filesystem was not formatted then it
would error: "AttributeError: 'FlashBdev' object has no attribute 'mount'".
That is due to it not being able to detect the filesystem on the block
device and just trying to mount the block device directly.
This commit fixes the issue by just catching all exceptions. Also it's not
needed to try the mount if `flashbdev.bdev` is None.
Can be used where mp_obj_int_get_checked() will overflow due to the
sign-bit solely. This returns an mp_uint_t, so it also verifies the given
integer is not negative.
Currently implemented only for mpz configurations.
This function is called often and with immediate objects enabled it has
more cases, so optimise it for speed. With this optimisation the runtime
is now slightly faster with immediate objects enabled than with them
disabled.
This option (enabled by default for object representation A, B, C) makes
None/False/True objects immediate objects, ie they are no longer a concrete
object in ROM but are rather just values, eg None=0x6 for representation A.
Doing this saves a considerable amount of code size, due to these objects
being widely used:
bare-arm: -392 -0.591%
minimal x86: -252 -0.170% [incl +52(data)]
unix x64: -624 -0.125% [incl -128(data)]
unix nanbox: +0 +0.000%
stm32: -1940 -0.510% PYBV10
cc3200: -1216 -0.659%
esp8266: -404 -0.062% GENERIC
esp32: -732 -0.064% GENERIC[incl +48(data)]
nrf: -988 -0.675% pca10040
samd: -564 -0.556% ADAFRUIT_ITSYBITSY_M4_EXPRESS
Thanks go to @Jongy aka Yonatan Goldschmidt for the idea.
This commit adjusts the definition of qstr encoding in all object
representations by taking a single bit from the qstr space and using it to
distinguish between qstrs and a new kind of literal object: immediate
objects. In other words, the qstr space is divided in two pieces, one half
for qstrs and the other half for immediate objects.
There is still enough room for qstr values (29 bits in representation A on
a 32-bit architecture, and 19 bits in representation C) and the new
immediate objects can be used for things like None, False and True.
This moves the MICROPY_PORT_INIT_FUNC hook to the end of mp_init(), just
before MP_THREAD_GIL_ENTER(), so that everything (in particular the GIL
mutex) is initialised before the hook is called. MICROPY_PORT_DEINIT_FUNC
is also moved to be symmetric (but there is no functional change there).
If a port needs to perform initialisation earlier than
MICROPY_PORT_INIT_FUNC then it can do it before calling mp_init().
This commit adds backward-word, backward-kill-word, forward-word,
forward-kill-word sequences for the REPL, with bindings to Alt+F, Alt+B,
Alt+D and Alt+Backspace respectively. It is disabled by default and can be
enabled via MICROPY_REPL_EMACS_WORDS_MOVE.
Further enabling MICROPY_REPL_EMACS_EXTRA_WORDS_MOVE adds extra bindings
for these new sequences: Ctrl+Right, Ctrl+Left and Ctrl+W.
The features are enabled on unix micropython-coverage and micropython-dev.
During readline development, this function may receive bad `pos` values. A
failing assert() is easier to understand than a "stack smashing detected"
message.
This adds a short paragraph on how to hook readthedocs.org up. The main
goal is to make people aware of the option, to help with contributing to
the documentation.
Invoking "make" will still build the standard "micropython" executable, but
other variants are now built using, eg, "make VARIANT=minimal". This
follows how bare-metal ports specify a particular board, and allows running
any make target (eg clean, test) with any variant.
Convenience targets (eg "make coverage") are provided to retain the old
behaviour, at least for now.
See issue #3043.
Most types are in rodata/ROM, and mp_obj_base_t.type is a constant pointer,
so enforce this const-ness throughout the code base. If a type ever needs
to be modified (eg a user type) then a simple cast can be used.
This change has the following effects:
- Reduces the resolution of the RTC sub-second counter from 30.52us to
122.07us.
- Allows RTC.calibration() to now support positive values (as well as
negative values).
- Reduces VBAT current consumption in standby mode by a small amount.
For general purpose use 122us resolution of the sub-second counter is
good enough, and the benefits of full range calibration and minor reduction
in VBAT consumption are worth the change.
As the mktime documentation for CPython states: "The earliest date for
which it can generate a time is platform-dependent". In particular on
Windows this depends on the timezone so e.g. for UTC+2 the earliest is 2
hours past midnight January 1970. So change the reference to the earliest
possible, for UTC+14.
Make versions 4.1 and lower do not allow $call as the main expression on a
line, so assign the result of the $call to a dummy variable.
Fixes issue #5426.
It is possible for `run_feature_check(pyb, args, base_path, 'float.py')` to
return `b'CRASH'`. This causes an unhandled exception in `int()`.
This commit fixes the problem by first testing for `b'CRASH'` before trying
to convert the return value to an integer.
Instances of the slice class are passed to __getitem__() on objects when
the user indexes them with a slice. In practice the majority of the time
(other than passing it on untouched) is to work out what the slice means in
the context of an array dimension of a particular length. Since Python 2.3
there has been a method on the slice class, indices(), that takes a
dimension length and returns the real start, stop and step, accounting for
missing or negative values in the slice spec. This commit implements such
an indices() method on the slice class.
It is configurable at compile-time via MICROPY_PY_BUILTINS_SLICE_INDICES,
disabled by default, enabled on unix, stm32 and esp32 ports.
This commit also adds new tests for slice indices and for slicing unicode
strings.
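For example, with MICROPY_PY_BUILTINS_SLICE_INDICES enabled:
s = slice(None, None, -2)
print(s.indices(10))              # (9, -1, -2): concrete start/stop/step for length 10
print(slice(2, 100).indices(10))  # (2, 10, 1): stop is clamped to the length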
The existing uos.remove cannot be used to remove directories, instead
uos.rmdir is needed. And also provide uos.rename to get a good set of
filesystem functionality without requiring additional Python-level os
functions (eg using ffi).
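For example (file and directory names are arbitrary):
import uos

uos.rename("data.txt", "data.bak")  # rename a file (or directory)
uos.rmdir("old_logs")               # remove an empty directory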
In CPython, EnvironmentError and IOError are now aliases of OSError so no
need to have them listed in the code. OverflowError inherits from
ArithmeticError because it's intended to be raised "when the result of an
arithmetic operation is too large to be represented" (per CPython docs),
and MicroPython aims to match the CPython exception hierarchy.
For the 3 ports that already make use of this feature (stm32, nrf and
teensy) this doesn't make any difference, it just allows to disable it from
now on.
For other ports that use pyexec, this decreases code size because the debug
printing code is dead (it can't be enabled) but the compiler can't deduce
that, so code is still emitted.
The qst value is always small enough to fit in 31-bits (even less) and
using a 32-bit shift rather than a 64-bit shift reduces code size by about
300 bytes.
Most stm32 boards can now be built in nan-boxing mode via:
$ make NANBOX=1
Note that if float is enabled then it will be forced to double-precision.
Also, native emitters will be disabled.
A user-defined type that defines __iter__ doesn't need any memory to be
pre-allocated for its iterator (because it can't use such memory). So
optimise for this case by not allocating the iter-buf.
In commit 71a3d6ec3b mp_setup_code_state was
changed from a 5-arg function to a 4-arg function, and at that point 5-arg
calls in native code were no longer needed. See also commit
4f9842ad80.
The struct member "dest" should never be less than "destStart", so their
difference is never negative. Cast as such to make the comparison
explicitly unsigned, ensuring the compiler produces the correct comparison
instruction, and avoiding any compiler warnings.
Allows assigning attributes on class instances that implement their own
__setattr__. Both object.__setattr__ and super(A, b).__setattr__ will work
with this commit.
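A minimal sketch of both forms (the class names are made up):
class Logged:
    def __setattr__(self, name, value):
        print("setting", name, "=", value)
        object.__setattr__(self, name, value)  # delegate to the default behaviour

class LoggedSub(Logged):
    def __setattr__(self, name, value):
        super(LoggedSub, self).__setattr__(name, value)

obj = LoggedSub()
obj.x = 1      # prints: setting x = 1
print(obj.x)   # prints: 1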
Move webrepl support code from ports/esp8266/modules into extmod/webrepl
(to be alongside extmod/modwebrepl.c), and use frozen manifests to include
it in the build on esp8266 and esp32.
A small modification is made to webrepl.py to make it work on non-ESP
ports, i.e. don't call dupterm_notify if not available.
This is an alternative to f4ed2df that adds a newline so that the output of
the test starts on a new line and the result of the test is prefixed with
"result: " to distinguish it from the test output.
Suggested-by: @dpgeorge
This reverts commit f4ed2dfa94.
This lets tinytest work as it was originally designed. An alternate
solution for the reverted commit will be implemented in a future commit.
This patch allows executing .mpy files (including native ones) directly on
a target, eg a board over a serial connection. So there's no need to copy
the file to its filesystem to test it.
For example:
$ mpy-cross foo.py
$ pyboard.py foo.mpy
- Corrected pin assignments and checked with CubeMX.
- Added additional I2C and UARTs.
- Added Ethernet interface definitions with lwIP and SSL support (but
Ethernet is currently unsupported on H7 MCUs so not fully enabled).
- Removed remarks on DFU/OCD in mpconfigboard.h because deploy-stlink works
fine too.
- Added more UARTs, I2C, corrected SPI, CAN, etc; verified against CubeMX.
- Adapted pins.csv to remove errors, add omissions, etc. according to
NUCLEO-144 User Manual.
- Changed linker file stm32f767.ld to reflect correct size of the Flash.
- Tested with LAN and SD card.
The Nucleo board does not have an SD card slot but does have the requisite
pins next to each other and labelled, so provide the configuration for
convenience.
This makes the loading of viper-code-with-relocations a bit neater and
easier to understand, by treating the rodata/bss like a special object to
be loaded into the constant table (which is how it behaves).
Because CPython 3.8.0 now produces different output:
- basics/parser.py: CPython does not allow '\\\n' as input.
- import/import_override: CPython imports _io.
We don't want to add a feature flag to .mpy files that indicate float
support because it will get complex and difficult to use. Instead the .mpy
is built using whatever precision it chooses (float or double) and the
native glue API will convert between this choice and what the host runtime
actually uses.
This commit adds a new tool called mpy_ld.py which is essentially a linker
that builds .mpy files directly from .o files. A new header file
(dynruntime.h) and makefile fragment (dynruntime.mk) are also included
which allow building .mpy files from C source code. Such .mpy files can
then be dynamically imported as though they were a normal Python module,
even though they are implemented in C.
Converting .o files directly (rather than pre-linked .elf files) allows the
resulting .mpy to be more efficient because it has more control over the
relocations; for example it can skip PLT indirection. Doing it this way
also allows supporting more architectures, such as Xtensa which has
specific needs for position-independent code and the GOT.
The tool supports targets of x86, x86-64, ARM Thumb and Xtensa (windowed
and non-windowed). BSS, text and rodata sections are supported, with
relocations to all internal sections and symbols, as well as relocations to
some external symbols (defined by dynruntime.h), and linking of qstrs.
Usage:
mpy-tool.py -o merged.mpy --merge mod1.mpy mod2.mpy
The constituent .mpy files are executed sequentially when the merged file
is imported, and they all use the same global namespace.
Implements text, rodata and bss generalised relocations, as well as generic
qstr-object linking. This allows importing dynamic native modules on all
supported architectures in a unified way.
The size of the event ringbuf was previously fixed to a compile-time config
value, but it's sometimes necessary to increase this for applications that
have large characteristic buffers to read, or many events at once.
With this commit the size can be set via BLE.config(rxbuf=512), for
example. This also resizes the internal event data buffer which sets the
maximum size of incoming data passed to the event handler.
This allows the user to explicitly select the behaviour of the write to the
remote peripheral. This is needed for peripherals that have
characteristics with WRITE_NO_RESPONSE set (instead of normal WRITE). The
function's signature is now:
BLE.gattc_write(conn_handle, value_handle, data, mode=0)
mode=0 means write without response, while mode=1 means write with
response. The latter was the original behaviour so this commit is a change
in behaviour of this method, and one should specify 1 as the 4th argument
to get back the old behaviour.
In the future there could be more modes supported, such as long writes.
The default protection for the BLE ringbuf is to use
MICROPY_BEGIN_ATOMIC_SECTION, which disables all interrupts. On stm32 it
only needs to disable the lowest priority IRQ, pendsv, because that's the
IRQ level at which the BLE stack is driven.
This removes the limit on data coming in from a BLE.gattc_read() request,
or a notify with payload (coming in to a central). In both cases the data
coming in to the BLE callback is now limited only by the available data in
the ringbuf, whereas before it was capped at (default hard coded) 20 bytes.
Instead of enqueue_irq() inspecting the ringbuf to decide whether to
schedule the IRQ callback (if ringbuf is empty), maintain a flag that knows
if the callback is on the schedule queue or not. This saves about 150
bytes of code (for stm32 builds), and simplifies all uses of enqueue_irq()
and schedule_ringbuf().
qstrs in this file are always included in all builds, even if not used
anywhere. So remove those that are never needed, and make USB names
conditional on having USB enabled.
With the memcpy() call placed last it avoids the effects of registers
clobbering. It's definitely effective in non-inlined functions, but even
here it is still making a small difference. For example, on stm32, this
saves an extra `ldr` instruction to load `o->vstr` after the memcpy()
returns.
The string length being longer than the allowed qstr length can happen in
many locations, for example in the parser with very long variable names.
Without an explicit check that the length is within range (as done in this
patch) the code would exhibit crashes and strange behaviour with truncated
strings.
And return -MP_EIO if calling storage_read_block/storage_write_block fails.
This lines up with the return type and value (negative for error) of the
calls to MICROPY_HW_BDEV_READBLOCKS (and WRITEBLOCKS, and BDEV2 versions).
The pyb.Flash() class can now be used to construct objects which reference
sections of the flash storage, starting at a certain offset and going for a
certain length. Such objects also support the extended block protocol.
The signature for the constructor is: pyb.Flash(start=-1, len=-1).
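A sketch of how such an object might be used (the start/len values are
illustrative and must match the board's flash layout):
import pyb

bdev = pyb.Flash(start=0, len=256 * 1024)  # a 256k window into the flash storage
buf = bytearray(512)
bdev.readblocks(0, buf)  # standard block protocol; the extended protocol is also supported
print(buf[:8])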
This commit refactors and generalises the boot-mount routine on stm32 so
that it can mount filesystems of arbitrary type. That is, it no longer
assumes that the filesystem is FAT. It does this by using mp_vfs_mount()
which does auto-detection of the filesystem type.
The address, adv payload and uuid fields of the event are pre-allocated by
modbluetooth, and reused in the IRQ handler. Simplify this and move all
storage into the `mp_obj_bluetooth_ble_t` instance.
This now allows users to hold on to a reference to these instances without
crashes, although they may be overwritten by future events. If they want
to hold onto the values longer term they need to copy them.
Using mp_hal_delay_ms allows the scheduler to run, which might result in
another transmit operation happening, which would bypass the sleep (and
fail). Use mp_hal_delay_us instead.
The compile-time configuration value MICROPY_HW_RTC_USER_MEM_MAX can now be
used to define the amount of memory set aside for RTC.memory(). If this
value is configured to zero then the RTC.memory functionality is not
included in the build.
Remove existing scan result events from the ringbuf if the ringbuf is full
and we're trying to enqueue any other event. This is needed so that events
such as SCAN_COMPLETE are always put on the ringbuf.
The IDF heap is more fragmented with IDF 4 and mbedtls cannot allocate
enough RAM with 16+16kiB for both in and out buffers, so reduce output
buffer size.
Fixes issue #5303.
This commit removes the Makefile-level MICROPY_FATFS config and moves the
MICROPY_VFS_FAT config to the Makefile level to replace it. It also moves
the include of the oofatfs source files in the build from each port to a
central place in extmod/extmod.mk.
For a port to enable VFS FAT support it should now set MICROPY_VFS_FAT=1
at the level of the Makefile. This will include the relevant oofatfs files
in the build and set MICROPY_VFS_FAT=1 at the C (preprocessor) level.
defindex.html (used by topindex.html) is deprecated, but topindex.html was
already identical other than setting the title, so just inherit directly
from layout.html.
This commit adds support for littlefs (v2) on all esp32 boards.
The original FAT filesystem still works and any board with a preexisting
FAT filesystem will still work as normal. It's possible to switch to
littlefs by reformatting the block device using:
import uos, flashbdev
uos.VfsLfs2.mkfs(flashbdev.bdev)
Then when the board reboots (soft or hard) the new littlefs filesystem will
be mounted. It's possible to switch back to a FAT filesystem by formatting
with uos.VfsFat.mkfs(flashbdev.bdev).
While the new manifest.py style got introduced for freezing python code
into the resulting binary, the old way - where files and modules within
ports/*/modules were baked into the resulting binary - was still
supported via `freeze('$(PORT_DIR)/modules')` within manifest.py.
However behaviour changed for symlinked directories (=modules), as those
links weren't followed anymore.
This commit restores the original behaviour by explicitly following
symlinks within a modules/ directory.
This commit adds a sys.implementation.mpy entry when the system supports
importing .mpy files. This entry is a 16-bit integer which encodes two
bytes of information from the header of .mpy files that are supported by
the system being run: the second and third bytes, .mpy version, and flags
and native architecture. This allows determining the supported .mpy file
dynamically by code, and also for the user to find it out by inspecting
this value. It's further possible to dynamically detect if the system
supports importing .mpy files by `hasattr(sys.implementation, 'mpy')`.
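A minimal sketch of using this value (the byte layout assumed in the comments
follows the description above and may differ in detail):
import sys
if hasattr(sys.implementation, 'mpy'):
    sys_mpy = sys.implementation.mpy
    print('mpy version:', sys_mpy & 0xff)    # assumed: low byte is the .mpy version
    print('mpy flags/arch:', sys_mpy >> 8)   # assumed: high byte is flags and native arch
else:
    print('this system cannot import .mpy files')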
Replace the is_running field with a tri-state variable to indicate
running/not-running/pending-exception.
Update tests to cover the various cases.
This allows cancellation in uasyncio even if the coroutine hasn't been
executed yet. Fixes #5242.
This wasn't necessary as the wrapped function already has a reference to
its globals. But it had a dual purpose of tracking whether the function
was currently running, so replace it with a bool.
This commit adds an implementation of machine.Timer backed by the soft
timer mechanism. It allows an arbitrary number of timers with 1ms
resolution, with an associated Python callback. The Python-level API
matches existing ports that have a soft timer, and is used as:
from machine import Timer
t = Timer(freq=10, callback=lambda t:print(t))
...
t = Timer(mode=Timer.ONE_SHOT, period=2000, callback=lambda t:print(t))
...
t.deinit()
This commit adds an implementation of a "software timer" with a 1ms
resolution, using SysTick. It allows an unlimited number of concurrent
timers (limited only by memory needed for each timer entry). They can be
one-shot or periodic, and associated with a Python callback.
There is a very small overhead added to the SysTick IRQ, which could be
further optimised in the future, eg by patching SysTick_Handler code
dynamically.
POSIX poll should always return POLLERR and POLLHUP in revents, regardless
of whether they were requested in the input events flags.
See issues #4290 and #5172.
runtime0.h is part of the MicroPython ABI so it's simpler if it's
independent of config options, like MICROPY_PY_REVERSE_SPECIAL_METHODS.
What's effectively done here is to move MP_BINARY_OP_DIVMOD and
MP_BINARY_OP_CONTAINS up in the enum, then remove the #if
MICROPY_PY_REVERSE_SPECIAL_METHODS conditional.
Without this change .mpy files would need to have a feature flag for
MICROPY_PY_REVERSE_SPECIAL_METHODS (when embedding native code that uses
this enum).
This commit has no effect when MICROPY_PY_REVERSE_SPECIAL_METHODS is
disabled. With this option enabled this commit reduces code size by about
60 bytes.
- Adds an explicit way to set the size of a value's internal buffer,
replacing `ble.gatts_write(handle, bytes(size))` (although that
still works).
- Adds an "append" mode for values, which means that remote writes
will append to the buffer (see the sketch below).
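A hypothetical sketch of the new call (the method name gatts_set_buffer and
its argument order are assumptions, not stated above):
# Reserve a 100-byte buffer for this value and make remote writes append to it.
ble.gatts_set_buffer(value_handle, 100, True)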
The MP_STATE_THREAD(stack_top) is always available so use it instead of
creating a separate variable. This also allows gc_collect() to be used as
an independent function, without real_main() being called.
This commit adds helper functions to call readblocks/writeblocks with a
fourth argument, the byte offset within a block.
Although the mp_vfs_blockdev_t struct has grown here by 2 machine words, in
all current uses of this struct within this repository it still fits within
the same number of GC blocks.
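For context, a block device exposing the extended interface described here
could look like the following sketch (the class is illustrative and not part
of this change):
class RAMBlockDev:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.data = bytearray(block_size * num_blocks)
    def readblocks(self, block_num, buf, offset=0):
        addr = block_num * self.block_size + offset
        buf[:] = self.data[addr:addr + len(buf)]
    def writeblocks(self, block_num, buf, offset=0):
        addr = block_num * self.block_size + offset
        self.data[addr:addr + len(buf)] = buf
    def ioctl(self, op, arg):
        if op == 4:  # block count
            return len(self.data) // self.block_size
        if op == 5:  # block size
            return self.block_size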
When a SPI bus is initialized with a SPI host that is currently in use the
exception msg incorrectly indicates "SPI device already in use". The
mention of "device" in the exception msg is confusing because the error is
about trying to use a SPI host that is already claimed. A better exception
msg is "SPI host already in use".
For consistency with "umachine". Now that weak links are enabled
by default for built-in modules, this should be a no-op, but allows
extension of the bluetooth module by user code.
Also move registration of ubluetooth to objmodule rather than
port-specific.
This commit implements automatic module weak links for all built-in
modules, by searching for "ufoo" in the built-in module list if "foo"
cannot be found. This means that all modules named "ufoo" are always
available as "foo". Also, a port can no longer add any other weak links,
which makes strict the definition of a weak link.
It saves some code size (about 100-200 bytes) on ports that previously had
lots of weak links.
Some changes from the previous behaviour:
- It doesn't intern the non-u module names (eg "foo" is not interned),
which saves code size, but will mean that "import foo" creates a new qstr
(namely "foo") in RAM (unless the importing module is frozen).
- help('modules') no longer lists non-u module names, only the u-variants;
this reduces duplication in the help listing.
Weak links are effectively the same as having a set of symbolic links on
the filesystem that is searched last. So an "import foo" will search
built-in modules first, then all paths in sys.path, then weak links last,
importing "ufoo" if it exists. Thus a file called "foo.py" somewhere in
sys.path will still have precedence over the weak link of "foo" to "ufoo".
See issues: #1740, #4449, #5229, #5241.
NimBLE doesn't actually copy this data; it requires it to stay live.
Only dereference it when we register a new set of services.
Fixes #5226.
This will allow incrementally adding services in the future, so
rename `reset` to `append` to make it clearer.
Internally change the representation of UUIDs to LE uint8* to simplify this.
This allows UUIDs to be easily used in BLE payloads (such as advertising).
Ref: #5186
When loading a manifest file, e.g. by include(), it will chdir first to the
directory of that manifest. This means that all file operations within a
manifest are relative to that manifest's location.
As a consequence of this, additional environment variables are needed to
find absolute paths, so the following are added: $(MPY_LIB_DIR),
$(PORT_DIR), $(BOARD_DIR). And rename $(MPY) to $(MPY_DIR) to be
consistent.
Existing manifests are updated to match.
Prior to this commit the systick IRQ priority was set at lowest priority on
F0/L0/WB MCUs, because it was left at the default and never configured.
This commit ensures the priority is configured and sets it to the highest
priority.
Remove the 240MHz CPU config option from sdkconfig.base and create a new
sdkconfig.240mhz file for those boards that want to use 240MHz on boot.
The default CPU frequency is now 160MHz (was 240MHz), to align with the ESP
IDF and support more boards (eg those with D2WD chips).
Fixes issue #5169.
DS1822P sensors behave just like the DS18B20 except for the following:
- it has a different family code: 0x22
- it has only the GND and DQ pins connected; it uses parasitic power from
the data line
Contributed by @nebelgrau77.
This introduces a new build variable FROZEN_MANIFEST which can be set to a
manifest listing (written in Python) that describes the set of files to be
frozen in to the firmware.
This prevents issues with concurrent access to the ringbuf.
MICROPY_BEGIN_ATOMIC_SECTION is only atomic to the same core. We could
address this with a mutex, but it's also not safe to call mp_sched_schedule
across cores.
This avoids a confusing ENOMEM raised from gap_advertise if there is
currently an active connection. This refers to the static connection
buffer pre-allocated by Nimble (nothing to do with MicroPython heap
memory).
Instead of encoding 4 zero bytes as placeholders for the simple_name and
source_file qstrs, and storing the qstrs after the bytecode, store the
qstrs at the location of these 4 bytes. This saves 4 bytes per bytecode
function stored in a .mpy file (for example lcd160cr.mpy drops by 232
bytes, 4x 58 functions). And resulting code size is slightly reduced on
ports that use this feature.
This is to more accurately match the BLE spec, where intervals are
configured in units of channel hop time (625us). When it was
specified in ms, not all "valid" intervals were able to be
specified.
Now that we're also allowing configuration of scan interval, this
commit updates advertising to match.
This adds two additional optional kwargs to `gap_scan()`:
- `interval_us`: How long between scans.
- `window_us`: How long to scan for during a scan.
The default with NimBLE is a 11.25ms window with a 1.28s interval.
Changing these parameters is important for detecting low-frequency
advertisements (e.g. beacons).
Note: these params are in microseconds, not milliseconds in order
to allow the 625us granularity offered by the spec.
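An illustrative call using the new kwargs (the first argument, the scan
duration in ms, is assumed from the existing API; the values are examples):
# Scan for 10s, waking every 1.28s to listen for 11.25ms (the NimBLE defaults).
ble.gap_scan(10000, interval_us=1280000, window_us=11250)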
This patch adds basic building blocks for the nrf9160.
It also includes a secure bootloader which forwards all
possible peripherals that are user selectable to become
non-secure. After configuring Flash, RAM and peripherals
the secure bootloader will jump to the non-secure domain
where MicroPython is placed.
The minimum size of a secure boot has to be a flash
block of 32KB, which is why the linker scripts are
offsetting the main application by this much.
The RAM offset is set to 128K, to allow for later
integration of Nordic Semiconductor's BSD socket
library which reserves the range 0x20010000 - 0x2001FFFF.
Add support for pca10059 with REPL over tinyusb USB CDC.
The board also includes a board specific module that will
recover UICR->REGOUT0 in case this has been erased.
This initial support does not preserve any existing bootloader
on the pca10059 in case this was present, and expects to use all
available flash on the device.
Add nrf-port tinyusb driver files. USB CDC can be activated
by board configuration files using MICROPY_HW_USB_CDC.
Updating BLE driver, Makefile, nrfx-glue and main.c to plug
in the tinyusb stack.
The specific board can be selected with the BOARD makefile variable. This
defaults (if not specified) to BOARD=GENERIC, which is the original default
firmware build. For the 512k target use BOARD=GENERIC_512K.
On other ports (e.g. ESP32) a complete Nimble implementation is provided
(i.e. we don't need to use the code in extmod/nimble). This change
extracts out the bits that we don't need to use on those ports:
- malloc/free/realloc for Nimble memory
- pendsv poll handler
- depowering the cywbt
Also cleans up the root pointer management.
In which case place the native function prelude in a bytes object, linked
from the const_table of that function. An architecture should define
N_PRELUDE_AS_BYTES_OBJ to 1 before including py/emitnative.c to emit
correct machine code, then enable MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ
so the runtime can correctly handle the prelude being in a bytes object.
Such that args/return regs for the parent are different to args/return regs
for child calls. For an architecture to use this feature it should define
the REG_PARENT_xxx macros before including py/emitnative.c.
Prior to this commit, when unwinding through an active finally the stack
was not being correctly popped/folded, which resulted in the VM crashing
for complicated unwinding of nested finallys.
This should be fixed with this commit, and more tests for return/break/
continue within a finally have been added to exercise this.
STM32F0 has PCLK=48MHz and maximum ADC clock is 14MHz so use PCLK/4=12MHz
to stay within spec of the ADC peripheral. In pyb.ADC set common sampling
time to approx 4us for internal and external sources. In machine.ADC
reduce sample time to approx 1us for external source, leave internal at
maximum sampling time.
As of 7d58a197cf, `NULL` no longer needs to be
excluded here because interning it is now allowed (MP_QSTRnull took its
place). This entry was preventing the use of MP_QSTR_NULL to mean "NULL"
(although this is not currently used).
A blacklist should not be needed because it should be possible to intern
all strings.
Fixes issue #5140.
This check follows CPython's behaviour, because 'import *' always populates
the globals with the imported names, not locals.
Since it's safe to do this (doesn't lead to a crash or undefined behaviour)
the check is only enabled for MICROPY_CPYTHON_COMPAT.
Fixes issue #5121.
This commit adds the option to use HSE or MSI system clock, and LSE or LSI
RTC clock, on L4 MCUs.
Note that prior to this commit the default clocks on an L4 part were MSI
and LSE. The defaults are now MSI and LSI.
In mpconfigboard.h select the clock source via:
#define MICROPY_HW_RTC_USE_LSE (0) or (1)
#define MICROPY_HW_CLK_USE_HSE (0) or (1)
and the PLLSAI1 N,P,Q,R settings:
#define MICROPY_HW_CLK_PLLSAIN (12)
#define MICROPY_HW_CLK_PLLSAIP (RCC_PLLP_DIV7)
#define MICROPY_HW_CLK_PLLSAIQ (RCC_PLLQ_DIV2)
#define MICROPY_HW_CLK_PLLSAIR (RCC_PLLR_DIV2)
The nrfx driver is aware of chip-specific registers, while the raw HAL
abstraction is not. This driver enables use of NVMC in the non-secure
domain for nrf9160.
This patch moves the check for MICROPY_PY_MACHINE_TEMP to come
before the inclusion of nrf_temp.h, because nrf_temp.h depends on
NRF_TEMP_Type which might not be defined for all nRF devices.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information. This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section". After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.
This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function. Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.
An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.
This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving. The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.
As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
For use with F0 MCUs that don't have HSI48. Select the clock source
explicitly in mpconfigboard.h.
On the NUCLEO_F091RC board use HSE bypass when HSE is chosen because the
NUCLEO clock source is STLINK not a crystal.
Before this patch the UART baudrate on F0 MCUs was wrong because the
stm32lib SystemCoreClockUpdate sets SystemCoreClock to 8MHz instead of
48MHz if HSI48 is routed directly to SYSCLK.
The workaround is to use HSI48 -> PREDIV (/2) -> PLL (*2) -> SYSCLK.
Fixes issue #5049.
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:
raise # 0 args (re-raise previous exception)
raise exc # 1 arg
raise exc from exc2 # 2 args (chained raise)
In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.
This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise. Every byte
counts! It also has the benefit of reducing code size (on all ports except
nanbox).
To make progress towards MicroPython supporting Python 3.5, adding the
matmul operator is important because it's a really "low level" part of the
language, being a new token and modifications to the grammar.
It doesn't make sense to make it configurable because 1) it would make the
grammar and lexer complicated/messy; 2) no other operators are
configurable; 3) it's not a feature that can be "dynamically plugged in"
via an import.
And matmul can be useful as a general purpose user-defined operator, it
doesn't have to be just for numpy use.
Based on work done by Jim Mussared.
Enabled by default, but disabled when REPL is connected to the VCP (this is
the existing behaviour). Can be configured at run-time with, eg:
pyb.USB_VCP().init(flow=pyb.USB_VCP.RTS | pyb.USB_VCP.CTS)
The new fdcan.c file provides the low-level C interface to the FDCAN
peripheral, and pyb_can.c is updated to support both traditional CAN and
FDCAN, depending on the MCU being compiled for.
Add the project file to the mpy-cross directory, which is also where the
executable ends up, and change the Appveyor settings to build mpy-cross
with both msvc and mingw-w64 and verify this all works by running tests
with --via-mpy.
If this is not set then calls to open() might default to text mode, which
is usually not wanted, and can even be wrong and lead to incorrect
results when loading binary .mpy files.
This also means that text files written and read will not have line-ending
translation from \n to \r\n and vice-versa anymore. This shouldn't be much
of a problem though since most tools dealing with text files adapt
automatically to either format.
Reserve sources.props for listing just the MicroPython core and extmod
files, similar to how py.mk lists port-independent source files. This
allows reusing the source list, for instance for building mpy-cross. The
sources for building the executable itself are listed in the corresponding
project file, similar to how the other ports specify the source files in
their Makefile.
Append to PyIncDirs, used to define include directories specific to
MicroPython, instead of just overwriting it so project files importing this
file can define additional directories. And allow defining the target
directory for the executable instead of hardcoding it to the windows
directory. Main reason for this change is that it will allow building
mpy-cross with msvc.
We want the .vcxproj to be just a container with the minimum content for
making it work as a project file for Visual Studio and MSBuild, whereas the
actual build options and actions get placed in separate reusable files.
This was roughly the case already except some compiler options were
overlooked; fix this here: we'll need those common options when adding a
project file for building mpy-cross.
These were probably added to detect more qstrs but as long as the
micropython executable itself doesn't use the same build options the qstrs
would be unused anyway. Furthermore these definitions are for internal use
and get enabled when corresponding MICROPY_EMIT_XXX are defined, in which
case the compiler would warn about symbol redefinitions since they'd be
defined both here and in the source.
This commit adds support for a second IDF hash (currently set to the
4.0-beta1 tag). When this hash is detected, the relevant changes are
applied.
This allows to start using v4 features (e.g. BLE with Nimble), and also
start doing testing, while still supporting the original, stable, v3.3 IDF.
Note: this feature is experimental, not well tested, and network.LAN and
network.PPP are currently unsupported.
This option affects py/vm.c and py/gc.c and using -Os gets them compiling a
bit smaller, and small firmware is the aim of these two ports. Also,
having these files compiled with -Os on these ports, and -O3 as the default
on other ports, gives a better understanding of code-size changes when
making changes to these files.
According to the schematic, the SDRAM part on this board is a
MT48LC4M32B2B5-6A, with "Row addressing 4K A[11:0]" (per datasheet). This
commit updates mpconfigboard.h from 13 to 12 to match.
This patch uses the newly-added esp32.Partition class to replace the
existing FlashBdev class. Partition objects implement the block protocol
so can be directly mounted via uos.mount(). This has the following
benefits:
- allows the filesystem partition location and size to be specified in
partitions.csv, and overridden by a particular board
- very easily allows to have multiple filesystems by simply adding extra
entries to partitions.csv
- improves efficiency/speed of filesystem operations because the block
device is implemented fully in C
- opens the possibility to have encrypted flash storage (since Partitions
can be encrypted)
Note that this patch is fully backwards compatible: existing filesystems
remain untouched and work with this new code.
- STM32F407VGT6 (1MB of Flash, 192+4 Kbytes of SRAM)
- 5V (via USB) or Li-Polymer Battery (3.7V) power input
- 2 x LEDs
- 2 x user switches
- 2 x mikroBUS sockets
- 2 x 1x26 mikromedia-compatible headers (52 pins)
https://www.mikroe.com/clicker-2-stm32f4
Mboot currently requires at least three LEDs to display each of the four
states. However, since there are only four possible states, the states can
be displayed via binary counting on only 2 LEDs (if only 2 are available).
The existing patterns are still used for 3 or 4 LEDs.
It was previously not taking into account that the list of pins was sparse,
so using the wrong index. The boards/X/pins.csv was generating the wrong
data for machine.Pin.board.
As part of this fix rename the variables to make it more clear what the
list contains (only board pins).
Prior to this patch mp_opcode_format would calculate the incorrect size of
the MP_BC_UNWIND_JUMP opcode, missing the additional byte. But, because
opcodes below 0x10 are unused and treated as bytes in the .mpy load/save
and freezing code, this bug did not show any symptoms, since nested unwind
jumps would rarely (if ever) reach a depth of 16 (so the extra byte of this
opcode would be between 0x01 and 0x0f and be correctly loaded/saved/frozen
simply as an undefined opcode).
This patch fixes this bug by correctly accounting for the additional byte.
With this patch alignment is done relative to the start of the buffer that
is being unpacked, not the raw pointer value, as per CPython.
Fixes issue #3314.
This commit adds support for sys.settrace, allowing to install Python
handlers to trace execution of Python code. The interface follows CPython
as closely as possible. The feature is disabled by default and can be
enabled via MICROPY_PY_SYS_SETTRACE.
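A small sketch of the CPython-style usage this enables (the frame attributes
and event strings are assumed to follow CPython, per the above):
import sys
def trace(frame, event, arg):
    print(event, frame.f_lineno)  # eg 'call'/'line' events, as in CPython
    return trace                  # keep tracing within this frame
sys.settrace(trace)
# ... traced code runs here ...
sys.settrace(None)                # stop tracing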
Prior to this patch the line number for a lambda would be "line 1" if the
body of the lambda contained only a simple expression (with no line number
stored in the parse node). Now the line number is always reported
correctly.
mp_compile no longer takes an emit_opt argument, rather this setting is now
provided by the global default_emit_opt variable.
Now, when -X emit=native is passed as a command-line option, the emitter
will be set for all compiled modules (included imports), not just the
top-level script.
In the future there could be a way to also set this variable from a script.
Fixes issue #4267.
With this patch exceptions that are re-raised have improved tracebacks
(less confusing, match CPython), and it makes re-raise slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Also general VM performance is not measurably affected.
Partially fixes issue #2928.
With this patch exception tracebacks that go through a finally are improved
(less confusing, match CPython), and it makes finally's slightly more
efficient (in time and RAM) because they no longer need to add a traceback.
Partially fixes issue #2928.
As per the README.md of the upstream source at
https://github.com/B-Con/crypto-algorithms, this source code was released
into the public domain, so make that explicit in the copyright line in the
header.
It's really an opcode that's not implemented, so use "opcode" instead of
"byte code". And remove the redundant "not implemented" text because that
is already implied by the exception type. There's no need to have a long
error message for an exception that is almost never encountered. Saves
about 20 bytes of code size on most ports.
- Split 'qemu-arm' from 'unix' for generating tests.
- Add frozen module to the qemu-arm test build.
- Add test that reproduces the requirement to half-word align native
function data.
Enabled via MICROPY_PY_URE_DEBUG, disabled by default (but enabled on unix
coverage build). This is a rarely used feature that costs a lot of code
(500-800 bytes flash). Debugging of regular expressions can be done
offline with other tools.
Recent versions of gcc perform optimisations which can lead to the
following code from the MP_NLR_JUMP_HEAD macro being omitted:
top->ret_val = val; \
MP_NLR_RESTORE_PYSTACK(top); \
*_top_ptr = top->prev; \
This is noticeable (at least) in the unix coverage on x86-64 built with gcc
9.1.0. This is because the nlr_jump function is marked as no-return, so
gcc deduces that the above code has no effect.
Adding MP_UNREACHABLE tells the compiler that the asm code may branch
elsewhere, and so it cannot optimise away the code.
As per PEP 485, this function appeared in Python 3.5. Configured via
MICROPY_PY_MATH_ISCLOSE which is disabled by default, but enabled for the
ports which already have MICROPY_PY_MATH_SPECIAL_FUNCTIONS enabled.
Before this patch I2C transactions using a hardware I2C peripheral on F0/F7
MCUs would not correctly generate the I2C restart condition, and instead
would generate a stop followed by a start. This is because the CR2 AUTOEND
bit was being set before CR2 START when the peripheral already had the I2C
bus from a previous transaction that did not generate a stop.
As a consequence all combined transactions, eg read-then-write for an I2C
memory transfer, generated a stop condition after the first transaction and
didn't generate a stop at the very end (but still released the bus). Some
I2C devices require a repeated start to function correctly.
This patch fixes this by making sure the CR2 AUTOEND bit is set after the
start condition and slave address have been fully transferred out.
Replaces the `SDKCONFIG` makefile variable with `BOARD`. Defaults to
BOARD=GENERIC. SPIRAM can be enabled with `BOARD=GENERIC_SPIRAM`.
Add example definition for TINYPICO, currently identical to GENERIC_SPIRAM
but with custom board/SoC names for the uPy banner.
Prior to this patch the amount of free space in an array (including
bytearray) was not being maintained correctly for the case of slice
assignment which changed the size of the array. Under certain cases (as
encoded in the new test) it was possible that the array could grow beyond
its allocated memory block and corrupt the heap.
Fixes issue #4127.
This patch implements a new sys.atexit function which registers a function
that is later executed when the main script ends. It is configurable via
MICROPY_PY_SYS_ATEXIT, disabled by default.
This is not compliant with CPython, rather it can be used to implement a
CPython compatible "atexit" module if desired (similar to how
sys.print_exception can be used to implement functionality of the
"traceback" module).
They are both enabled by default, but can be disabled by defining
MICROPY_HW_ENABLE_MDNS_QUERIES and/or MICROPY_HW_ENABLE_MDNS_RESPONDER to
0. The hostname for the responder is currently taken from
tcpip_adapter_get_hostname() but should eventually be configurable.
This commit adds the connect() method to the PPP interface and requires
that connect() be called after active(1). This is a breaking change for
the PPP API.
With the connect() method it's now possible to pass in authentication
information for PAP/CHAP, eg:
ppp.active(1)
ppp.connect(authmode=ppp.AUTH_PAP, username="user", password="password")
If no authentication is needed simply call connect() without any
parameters. This will get the original behaviour of calling active(1).
Some SD/MMC breakout boards don't support 4-bit bus mode. This adds a new
macro MICROPY_HW_SDMMC_BUS_WIDTH that allows each board to define the width
of the SD/MMC bus interface used on that board, defaulting to 4 bits.
This patch adds a simple but powerful hook into the import system, in a
CPython compatible way, by allowing to override builtins.__import__.
This does introduce some overhead to all imports but it's minor:
- the dict lookup of __import__ is bypassed if there are no modifications
to the builtins module (which is the case at start up);
- imports are not performance critical, usually done just at the start of a
script;
- compared to how much work is done in an import, looking up a value in a
dict is a relatively small additional piece of work.
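A sketch of the kind of hook this enables, assuming the CPython-compatible
__import__ signature:
import builtins
_orig_import = builtins.__import__
def _logging_import(name, globals=None, locals=None, fromlist=(), level=0):
    print('importing', name)
    return _orig_import(name, globals, locals, fromlist, level)
builtins.__import__ = _logging_import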
JSON requires that keys of objects be strings. CPython will therefore
automatically quote simple types (NoneType, bool, int, float) when they are
used directly as keys in JSON output. To prevent subtle bugs and emit
compliant JSON, MicroPython should at least test for such keys so they
aren't silently let through. Then doing the actual quoting is a similar
cost to raising an exception, so that's what is implemented by this patch.
Fixes issue #4790.
The previous version did not work on MCUs that only had USB device mode
(compared to OTG) because of the handling of NAK. And this previous
handling of NAK had a race condition where a new packet could come in
before USBD_HID_SetNAK was called (since USBD_HID_ReceivePacket clears NAK
as part of its operation). Furthermore, the double buffering of incoming
reports was not working, only one buffer could be used at a time.
This commit rewrites the HID interface code to have a single incoming
buffer, and only calls USBD_HID_ReceivePacket after the user has read the
incoming report (similar to how the VCP does its flow control). As such,
USBD_HID_SetNAK and USBD_HID_ClearNAK are no longer needed.
API functionality from the user's point of view should be unchanged with
this commit.
Use "-f" to select filesystem mode, followed by the command to execute.
Optionally put ":" at the start of a filename to indicate that it's on the
remote device, if it would otherwise be ambiguous.
Examples:
$ pyboard.py -f ls
$ pyboard.py -f cat main.py
$ pyboard.py -f cp :main.py . # get from device
$ pyboard.py -f cp main.py : # put to device
$ pyboard.py -f rm main.py
On this port the GIL is enabled and everything works under the assumption
of the GIL, ie that a given task has exclusive access to the uPy state, and
any ISRs interrupt the current task and therefore the ISR inherits
exclusive access to the uPy state for the duration of its execution.
If the MicroPython tasks are not pinned to a specific core then an ISR may
be executed on a different core to the task, making it possible for the
main task and an ISR to execute in parallel, breaking the assumption of the
GIL.
The easiest and safest fix for this is to pin all MicroPython related code
to the same CPU core, as done by this patch. Then any ISR that accesses
MicroPython state must be registered from a MicroPython task, to ensure it
is invoked on the same core.
See issue #4895.
The C++ standard forbids redefining keywords, like inline and alignof, so
guard these definitions to avoid that, allowing to include the MicroPython
headers by C++ code.
This new series of MCUs is similar to the L4 series with an additional
Cortex-M0 coprocessor. The firmware for the wireless stack must be managed
separately and MicroPython does not currently interface to it. Supported
features so far include: RTC, UART, USB, internal flash filesystem.
Behaviour was changed from stack to queue in
8977c7eb58, and this updates variable names
to match. Also updates other references (docs, error messages).
The new configurations MICROPY_HW_USB_MSC and MICROPY_HW_USB_HID can be
used by a board to enable or disable MSC and/or HID. They are both
enabled by default.
__clear_cache causes a compile error when using clang. Instead use
__builtin___clear_cache which is available under both gcc and clang.
Also replace tabs with spaces in this section of code (introduced by a
previous commit).
In a non-thread build, using &_ram_end as the top-of-stack is no longer
correct because the stack is not always at the very top end of RAM. See
eg 04c7cdb668 and
3786592097. The correct value to use is
&_estack, which is the value stored in MP_STATE_THREAD(stack_top), and
using the same code for both thread and non-thread builds makes the code
cleaner.
stm32lib now provides system_stm32XXxx.c source files for all MCU variants,
which includes SystemInit and prescaler tables. Since these are quite
standard and don't need to be changed, switch to use them instead of custom
variants, making the start-up code cleaner.
The SystemInit code in stm32lib was checked and is equivalent to what is
removed from the stm32 port in this commit.
Without this you often don't get any DNS server from your network provider.
Additionally, setting your own DNS _does not work_ without this option set
(which could be a bug in the PPP stack).
This is a start to make a more consistent machine.RTC class across ports.
The stm32 pyb.RTC class at least has the datetime() method which behaves
the same as esp8266 and esp32, and with this patch the ntptime.py script
now works with stm32.
The helper function exec_user_callback executes within the context of an
lwIP C callback, and the user (Python) callback to be scheduled may want to
perform further TCP/IP actions, so the latter should be scheduled to run
outside the lwIP context (otherwise it's effectively a "hard IRQ" and such
callbacks have lots of restrictions).
If tcp_write returns ERR_MEM then it's not a fatal error but instead means
the caller should retry the write later on (and this is what lwIP's netconn
API does).
This fixes problems where a TCP send would raise OSError(ENOMEM) in
situations where the TCP/IP stack is under heavy load. See eg issues #1897
and #1971.
If both FS and HS USB peripherals are enabled for a board then the active
one used for the REPL will now be auto-detected, by checking to see if both
the DP and DM lines are actively pulled low. By default the code falls
back to use MICROPY_HW_USB_MAIN_DEV if nothing can be detected.
When going out of memory-mapped mode to do a control transfer to the QSPI
flash, the MPU settings must be changed to forbid access to the memory
mapped region. And any ongoing transfer (eg memory mapped continuous read)
must be aborted.
The Cortex-M7 CPU will do speculative loads from any memory location that
is not explicitly forbidden. This includes the QSPI memory-mapped region
starting at 0x90000000 and with size 256MiB. Speculative loads to this
QSPI region may 1) interfere with the QSPI peripheral registers (eg the
address register) if the QSPI is not in memory-mapped mode; 2) attempt to
access data outside the configured size of the QSPI flash when it is in
memory-mapped mode. Both of these scenarios will lead to issues with the
QSPI peripheral (eg Cortex bus lock up in scenario 2).
To prevent such speculative loads from interfering with the peripheral the
MPU is configured in this commit to restrict access to the QSPI mapped
region: when not memory mapped the entire region is forbidden; when memory
mapped only accesses to the valid flash size are permitted.
This fixes compiling for older architectures (e.g. armv5tej).
According to [1], the limit of R0-R7 for the STR and LDR instructions is
tied to the Thumb instruction set and not any specific processor
architectures.
[1]: http://www.keil.com/support/man/docs/armasm/armasm_dom1361289906890.htm
When compiled with hard float the system should enable FP access when it
starts or else FP instructions lead to a fault. But this minimal port does
not enable (or use) FP and so, to keep it minimal, switch to use soft
floating point. (This became an issue due to the recent commit
34c04d2319 which saves/restores FP registers
in the NLR state.)
misc_aes.py and misc_mandel.py are adapted from sources in this repository.
misc_pystone.py is the standard Python pystone test. misc_raytrace.py is
written from scratch.
This benchmarking test suite is intended to be run on any MicroPython
target. As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target. When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).
The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters. This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).
To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
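A sketch of what that interface can look like in a test file (the parameter
values and the work done are illustrative):
bm_params = {
    (50, 25): (500,),       # (N, M) selection -> test-specific parameters
    (1000, 1000): (10000,),
}
def bm_setup(params):
    (niter,) = params
    state = [0]
    def run():
        for i in range(niter):
            state[0] += i
    def get_result():
        return niter, state[0]  # normalisation factor and output to check
    return run, get_result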
When running the test the number of microseconds taken by the test are
recorded. Then this is converted into a benchmark score by inverting it
(so higher number is faster) and normalising it with an appropriate factor
(based roughly on the amount of work done by the test, eg number of
iterations).
Test outputs are also compared against a "truth" value, computed by running
the test with CPython. This provides a basic way of making sure the test
actually ran correctly.
Each test is run multiple times and the results averaged and standard
deviation computed. This is output as a summary of the test.
To make comparisons of performance across different runs the
run-perfbench.py script also includes a diff mode that reads in the output
of two previous runs and computes the difference in performance. Reports
are given as a percentage change in performance with a combined standard
deviation to give an indication if the noise in the benchmarking is less
than the thing that is being measured.
Example invocations for PC, pyboard and esp8266 targets respectively:
$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
With both MICROPY_PERSISTENT_CODE_SAVE and MICROPY_PERSISTENT_CODE_LOAD
enabled the code fails to compile, due to undeclared 'n_obj'. If
MICROPY_EMIT_NATIVE is disabled there are more errors due to the use of
undefined fields in mp_raw_code_t.
This patch fixes such compilation by avoiding undefined fields.
MICROPY_EMIT_NATIVE was changed to MICROPY_EMIT_MACHINE_CODE in this file
to match the mp_raw_code_t definition.
Change static LED functions to lowercase names, and trim down source code
lines for variants of MICROPY_HW_LED_COUNT. Also rename configuration for
MICROPY_HW_LEDx_LEVEL to MICROPY_HW_LEDx_PULLUP to align with global PULLUP
configuration.
Commit 9e68eec8ea introduced a regression
where the PID of the USB device would be 0xffff if the default value was
used. This commit fixes that by using a signed int type.
This saves time when building on Travis CI: unconditionally fetching all
submodules takes about 40 seconds, but not all are needed for any given
port, so only fetch as necessary.
Entering a bootloader (ST system bootloader, or custom mboot) from software
by directly branching to it is not reliable, and the reliability of it
working can depend on the peripherals that were enabled by the application
code. It's also not possible to branch to a bootloader if the WDT is
enabled (unless the bootloader has specific provisions to feed the WDT).
This patch changes the way a bootloader is entered from software by first
doing a complete system reset, then branching to the desired bootloader
early on in the start-up process. The top two words of RAM (of the stack)
are reserved to store flags indicating that the bootloader should be
entered after a reset.
WIFI_REASON_AUTH_FAIL does not necessarily mean the password is wrong, and
a wrong password may not lead to a WIFI_REASON_AUTH_FAIL error code. So to
improve reliability connecting to a WLAN always reconnect regardless of the
error.
These s16-s21 registers are used by gcc so need to be saved. Future
versions of gcc (beyond v9.1.0), or other compilers, may eventually need
additional registers saved/restored.
See issue #4844.
This updates ESP IDF to use v3.3-beta3. And also adjusts README.md to
point to stable docs which provide a link to download the correct toolchain
for this IDF version, namely 1.22.0-80-g6c4433a-5.2.0
Previously the end of the heap was the start (lowest address) of the stack.
With the changes in this commit these addresses are now independent,
allowing a board to place the heap and stack in separate locations.
With this the user can select multiple logical units to expose over USB MSC
at once, eg: pyb.usb_mode('VCP+MSC', msc=(pyb.Flash(), pyb.SDCard())). The
default behaviour is the original behaviour of just one unit at a time.
Eventually these responses could be filled in by a function to make their
contents dynamic, depending on the attached logical units. But for now
they are fixed, and this patch fixes the MODE SENSE(6) responses so it is
the correct length with the correct header.
SCSI can support multiple logical units over the one interface (in this
case over USBD MSC) and here the MSC code is reworked to support this
feature. At this point only one LU is used and the behaviour is mostly
unchanged from before, except the INQUIRY result is different (it will
report "Flash" for both flash and SD card).
Previously, when linking qstr objects in native code for ARM Thumb, the
index into the machine code was being incremented by 4, not 8. It should
be 8 to account for the size of the two machine instructions movw and movt.
This patch makes sure the index into the machine code is incremented by the
correct amount for all variations of qstr linking.
See issue #4829.
To use it a board should define MICROPY_PY_USSL=1 and MICROPY_SSL_MBEDTLS=1
at the Makefile level. With the provided configuration it adds about 64k
to the build.
Setting MICROPY_PY_USSL and MICROPY_SSL_MBEDTLS at the Makefile-level will
now build mbedTLS from source and include it in the build, with the ussl
module using this TLS library. Extra settings like MBEDTLS_CONFIG_FILE may
need to be provided by a given port.
If a port wants to use its own mbedTLS library then it should not set
MICROPY_SSL_MBEDTLS at the Makefile-level but rather set it at the C level,
and provide the library as part of the build in its own way (see eg esp32
port).
Instead of converting to a small-int at runtime this can be done at compile
time, then we only have a simple comparison during runtime. This reduces
code size on some ports (e.g. -4 on qemu-arm, -52 on unix nanbox), and for
others at least doesn't increase code size.
Fixes errors in the tool when 1) linking qstrs in native ARM-M code; 2)
freezing multiple files some of which use native code and some which don't.
Fixes issue #4829.
It doesn't work to tie the polling of an underlying NIC driver (eg to check
the NIC for pending Ethernet frames) with its associated lwIP netif. This
is because most NICs are implemented with IRQs and don't need polling,
because there can be multiple lwIP netif's per NIC driver, and because it
restricts the use of the netif->state variable. Instead the NIC should
have its own specific way of processing incoming Ethernet frames.
This patch removes this generic NIC polling feature, and for the only
driver that uses it (Wiznet5k) replaces it with an explicit call to the
poll function (which could eventually be improved by using a proper
external interrupt).
This adds support for SD cards using the ESP32's built-in hardware SD/MMC
host controller, over either the SDIO bus or SPI. The class is available
as machine.SDCard and using it can be as simple as:
uos.mount(machine.SDCard(), '/sd')
If the board-pin name is left empty then only the cpu-pin name is used, eg
",PA0". If the board-pin name starts with a hyphen then it's available as
a C definition but not in the firmware, eg "-X1,PA0".
The patch solves the problem where multiple Timer objects (e.g. multiple
Timer(0) instances) could initialise multiple handles to the same internal
timer. The list of timers is now maintained not for "active" timers (where
init is called), but for all timers created. The timers are only removed
from the list of timers on soft-reset (machine_timer_deinit_all).
Fixes #4078.
The board config option MICROPY_HW_USB_ENABLE_CDC2 is now changed to
MICROPY_HW_USB_CDC_NUM, and the latter should be defined to the maximum
number of CDC interfaces to support (defaults to 1).
mpy-cross uses MICROPY_DYNAMIC_COMPILER and MICROPY_EMIT_NATIVE but does
not actually need to execute native functions, and does not need
mp_fun_table. This commit makes it so mp_fun_table and all its entries are
not built when MICROPY_DYNAMIC_COMPILER is enabled, significantly reducing
the size of the mpy-cross executable and allowing it to be built on more
machines/OS's.
In d5f0c87bb9 this call to tcp_poll() was
added to put a timeout on closing TCP sockets. But after calling
tcp_close() the PCB may be freed and therefore invalid, so tcp_poll() can
not be used at that point. As a fix this commit calls tcp_poll() before
closing the TCP PCB. If the PCB is subsequently closed and freed by
tcp_close() or tcp_abort() then the PCB will not be on any active list and
the callback will not be executed, which is the desired behaviour (the
_lwip_tcp_close_poll() callback only needs to be called if the PCB remains
active for longer than the timeout).
Commit 2848a613ac introduced a bug where
lwip_socket_free_incoming() accessed pcb.tcp->state after the PCB was
closed. The state may have changed due to that close call, or the PCB may
be freed and therefore invalid. This commit fixes that by calling
lwip_socket_free_incoming() before the PCB is closed.
Set the active MPU region to the actual size of SDRAM configured and
invalidate the rest of the memory-mapped region, to prevent errors due to
CPU speculation. Also update the attributes of the SDRAM region as per ST
recommendations, and change region numbers to avoid conflicts elsewhere in
the codebase (see eth usage).
I2C can't be enabled in prj_base.conf because it's a board-specific
feature. For example, if a board doesn't have I2C but CONFIG_I2C=y then
the build will fail (on Zephyr build system side). The patch here gets the
qemu_cortex_m3 build working again.
This enables going back to previous wrapped lines using backspace or left
arrow: instead of just sticking to the beginning of a line, the cursor will
move a line up.
This ';' makes Windows compilation fail with GNU make 4.2.1. It was added
in 0dc85c9f86 as part of a shell if-
statement, but this if-statement was subsequently removed in
23a693ec2d so the semicolon is not needed.
The variable $(TOUCH) is initialized with the "touch" value in mkenv.mk
like for the other command line tools (rm, echo, cp, mkdir etc). With
this, for example, Windows users can specify the path of touch.exe.
Updating the nrfx git submodule containing HAL drivers for nrf-port
from v1.3.1 to current master. The version pointed to is one commit
ahead of v1.7.1 release. The extra commit contains a bugfix for
nrfx_uart_tx_in_progress() making it report correctly.
The general upgrade of nrfx is considered to be safe, as almost all
changes in between 1.3.1 and 1.7.1 are related to peripherals and
target devices not used by the nrf-port as of today.
The variable $(CAT) is initialised with the "cat" value in mkenv.mk like
for the other command line tools (rm, echo, cp, mkdir etc). With this,
for example, Windows users can specify the path of cat.exe.
Reuse the implementation for bytes since it works the same way regardless
of the underlying type. This method gets added for CPython compatibility
of bytearray, but to keep the code simple and small array.array now also
has a working decode method, which is non-standard but doesn't hurt.
On MCUs that have an I2C TIMINGR register, this can now be explicitly set
via the "timingr" keyword argument to the I2C constructor, for both
machine.I2C and pyb.I2C. This allows to configure precise timing values
when the defaults are inadequate.
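For example (the TIMINGR value here is purely illustrative and must be
computed for the actual clock configuration and bus speed):
from machine import I2C
i2c = I2C(1, freq=400000, timingr=0x00300F38)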
Previously the hardware I2C timeout was hard coded to 50ms which isn't
guaranteed to be enough depending on the clock stretching specs of the I2C
device(s) in use.
This patch ensures the hardware I2C implementation honors the existing
timeout argument passed to the machine.I2C constructor. The default
timeout for software and hardware I2C is now 50ms.
For example: i2c.writevto(addr, (buf1, buf2)). This allows to efficiently
(wrt memory) write data composed of separate buffers, such as a command
followed by a large amount of data.
This allows to efficiently send to an I2C slave data that is made up of
more than one buffer. Instead of needing to allocate temporary memory to
combine buffers together this new method allows to pass in a tuple or list
of buffers. The name is based on the POSIX function writev() which has
similar intentions and signature.
The reasons for taking this approach (compared to having an interface with
separate start/write/stop methods) are:
- It's a backwards compatible extension.
- It's convenient for the user.
- It's efficient because there is only one Python call, then the C code can
do everything in one go.
- It's efficient on the I2C bus because the implementation can do
everything in one go without pauses between blocks of bytes.
- It should be possible to implement this extension in all ports, for
hardware and software I2C.
Further discussion is found in issue #3482, PR #4020 and PR #4763.
Recent gcc versions (at least 9.1) give a warning about using "sp" in the
clobber list. Such code is removed by this patch. A dedicated function is
instead used to set SP and branch to the bootloader so the code has full
control over what happens.
Fixes issue #4785.
This also fixes deleting the PPP task, since eTaskGetState() never returns
eDeleted.
A limitation with this patch: once the PPP is deactivated (ppp.active(0))
it cannot be used again. A new PPP instance must be created instead.
The user can now select their own package index by either passing the "-i"
command line option, or setting the upip.index_urls variable (before doing
an install).
The https://micropython.org/pi package index hosts packages from
micropython-lib and will be searched first when installing a package. If a
package is not found here then it will fallback to PyPI.
This allows figuring out the number of bytes in the memoryview object as
len(memview) * memview.itemsize.
The feature is enabled via MICROPY_PY_BUILTINS_MEMORYVIEW_ITEMSIZE and is
disabled by default.
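For example:
import array
a = array.array('i', range(4))
m = memoryview(a)
print(len(m) * m.itemsize)  # total number of bytes referenced by the memoryview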
The original code called setsockopt(SO_RCVTIMEO/SO_SNDTIMEO) with NULL
timeout structure argument, which is an illegal usage of that function.
The old code also didn't validate the return value of setsockopt, missing
the bug completely.
Before this change, if the USB was reconnected it was possible that some
characters in the TX buffer were retransmitted because tx_buf_ptr_out and
tx_buf_ptr_out_shadow were reset while tx_buf_ptr_in wasn't. That
behaviour is fixed here by retaining the TX buffer state across reconnects.
Fixes issue #4761.
The new function factory_reset_make_files() populates the given filesystem
with the default factory files. It is defined with weak linkage so it can
be overridden by a board.
This commit also brings some minor user-facing changes:
- boot.py is now no longer created unconditionally if it doesn't exist, it
is now only created when the filesystem is formatted and the other files
are populated (so, before, if the user deleted boot.py it would be
recreated at next boot; now it won't be).
- pybcdc.inf and README.txt are only created if the board has USB, because
they only really make sense if the filesystem is exposed via USB.
It's more common to need non-blocking behaviour when reading from a UART,
rather than having a large timeout like 1000ms (the original behaviour).
With a large timeout it's 1) likely that the function will read forever if
characters keep trickling in; or 2) the function will unnecessarily wait
when characters come sporadically, eg at a REPL prompt.
Prior to this commit, building the unix port with `DEBUG=1` and
`-finstrument-functions` the compilation would fail with an error like
"control reaches end of non-void function". This change fixes this by
removing the problematic "if (0)" branches. Not all branches affect
compilation, but they are all removed for consistency.
- IBK-BLYST-NANO: Breakout board
- IDK-BLYST-NANO: DevKit board with builtin IDAP-M CMSIS-DAP Debug JTAG,
RGB led
- BLUEIO-TAG-EVIM: Sensor tag board (environmental sensor
(T, H, P, Air quality) + 9 axis motion sensor)
Also, the LED module has been updated to support individual base level
configuration of each LED. If set, this will be used instead of the
common configuration, MICROPY_HW_LED_PULLUP. The new configuration,
MICROPY_HW_LEDX_LEVEL, where X is the LED number can be used to set
the base level of the specific LED.
The alternate function pin allocations are different to other NUCLEO-144
boards. This is because the STM32F413 has a very high peripheral count:
10x UART, 5x SPI, 3x I2C, 3x CAN. The pinout was chosen to expose all
these devices on separate pins except CAN3 which shares a pin with UART1
and SPI1 which shares pins with DAC.
Includes:
- Support for CAN3.
- Support for UART9 and UART10.
- stm32f413xg.ld and stm32f413xh.ld linker scripts.
- stm32f413_af.csv alternate function mapping.
- startup_stm32f413xx.s because F413 has different interrupt vector table.
- Memory configuration with: 240K filesystem, 240K heap, 16K stack.
This patch makes pllvalues.py generate two tables: one for when HSI is used
and one for when HSE is used. The correct table is then selected at
compile time via the existing MICROPY_HW_CLK_USE_HSI.
With this change, @micropython.asm_thumb functions will work on standard
ARM processors (that are in ARM state by default), in scripts and
precompiled .mpy files.
Addresses issue #4675.
When building with link time optimization enabled it is possible both
gc_collect() and gc_collect_regs_and_stack() get inlined into gc_alloc()
which can result in the regs variable being pushed on the stack earlier
than some of the registers. Depending on the calling convention, those
registers might however contain pointers to blocks which have just been
allocated in the caller of gc_alloc(). Then those pointers end up higher on
the stack than regs, aren't marked by gc_collect_root() and hence get
swept, even though they're still in use.
As reported in #4652 this happened in 32-bit msvc release builds:
mp_lexer_new() does two consecutive allocations and the latter triggered a
gc_collect() which would sweep the memory of the first allocation again.
Otherwise mp_interrupt_char will have a value of zero on start up (because
it's in the BSS) and a KeyboardInterrupt may be raised during start up.
For example this can occur if there is a UART attached to the REPL which
sends spurious null bytes when the device turns on.
So that boot.py and/or main.py can be frozen (either as STR or MPY) in the
same way that other scripts are frozen. Frozen scripts take precedence over
scripts in the VFS.
It consists of:
1. "do_handhake" param (default True) to wrap_socket(). If it's False,
handshake won't be performed by wrap_socket(), as it would be done in
blocking way normally. Instead, SSL socket can be set to non-blocking mode,
and handshake would be performed before the first read/write request (by
just returning EAGAIN to these requests, while instead reading/writing/
processing handshake over the connection). Unfortunately, axTLS doesn't
really support non-blocking handshake correctly. So, while the framework
for this is implemented on MicroPython's module side, in the case of axTLS
it won't work reliably.
2. Implementation of .setblocking() method. It must be called on SSL socket
for blocking vs non-blocking operation to be handled correctly (for
example, it's not enough to wrap non-blocking socket with wrap_socket()
call - resulting SSL socket won't be itself non-blocking). Note that
.setblocking() propagates the call to the underlying socket object, as
expected.
For this, add wrap_socket(do_handshake=False) param. CPython doesn't have
such a param at a module's global function, and at SSLContext.wrap_socket()
it has a do_handshake_on_connect param, but that name is uselessly long.
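A rough usage sketch of the deferred handshake (addr is a placeholder, and
the exact behaviour depends on the SSL backend as described above):
    import usocket, ussl
    s = usocket.socket()
    s.connect(addr)                               # addr is a placeholder
    ss = ussl.wrap_socket(s, do_handshake=False)  # don't handshake here
    ss.setblocking(False)  # handshake proceeds lazily on first read/write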
Beyond that, make write() handle not just MBEDTLS_ERR_SSL_WANT_WRITE, but
also MBEDTLS_ERR_SSL_WANT_READ, as during handshake, write call may be
actually preempted by need to read next handshake message from peer.
Likewise, for read(). And even after the initial negotiation, situations
like that may happen e.g. with renegotiation. Both
MBEDTLS_ERR_SSL_WANT_READ and MBEDTLS_ERR_SSL_WANT_WRITE are however mapped
to the same None return code. The idea is that if the same read()/write()
method is called repeatedly, the progress will be made step by step anyway.
The caveat is if user wants to add the underlying socket to uselect.poll().
To be reliable, in this case, the socket should be polled for both POLL_IN
and POLL_OUT, as we don't know the actual expected direction. But that's
actually problematic. Consider for example that write() ends with
MBEDTLS_ERR_SSL_WANT_READ, but gets converted to None. We put the
underlying socket on poll using POLL_IN|POLL_OUT but that probably returns
immediately with POLL_OUT, as the underlying socket is writable. We call the
same ussl write() again, which again results in MBEDTLS_ERR_SSL_WANT_READ,
etc. We thus go into busy-loop.
So, the handling in this patch is temporary and needs fixing. But exact way
to fix it is not clear. One way is to provide explicit function for
handshake (CPython has do_handshake()), and let *that* return distinct
codes like WANT_READ/WANT_WRITE. But as mentioned above, past the initial
handshake, such situation may happen again with at least renegotiation. So
apparently, the only robust solution is to return "out of bound" special
sentinels like WANT_READ/WANT_WRITE from read()/write() directly. CPython
throws exceptions for these, but exceptions are too expensive to adopt for an
efficiency-conscious implementation like MicroPython.
The machine.WDT() now accepts the "timeout" keyword argument to select the
WDT interval. And the WDT is changed to panic mode which means it will
reset the device if the interval expires (instead of just printing an error
message).
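An illustrative sketch of the new keyword argument (the exact timeout units
and limits are port-specific):
    from machine import WDT
    wdt = WDT(timeout=5000)  # reset the device if not fed within ~5000 ms
    wdt.feed()               # call periodically to prevent the reset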
On the STM32F722 (at least, but STM32F767 is not affected) the CK48MSEL bit
must be deselected before PLLSAION is turned off, or else the 48MHz
peripherals (RNG, SDMMC, USB) may get stuck without a clock source.
In such "lock up" cases it seems that these peripherals are still being
clocked from the PLLSAI even though the CK48MSEL bit is turned off. A hard
reset does not get them out of this stuck state. Enabling the PLLSAI and
then disabling it does get them out. A test case to see this is:
    import machine, pyb
    for i in range(100):
        machine.freq(122_000000)
        machine.freq(120_000000)
        print(i, [pyb.rng() for _ in range(4)])
On occasion the RNG will just return 0's, but will get fixed again on the
next loop (when PLLSAI is enabled by the change to a SYSCLK of 122MHz).
Fixes issue #4696.
The stm32 and nrf ports already had the behaviour that they would first
check if the script exists before executing it, and this patch makes all
other ports work the same way. This helps when developing apps because
it's hard to tell (when unconditionally trying to execute the scripts) if
the resulting OSError at boot up comes from missing boot.py or main.py, or
from some other error. And it's not really an error if these scripts don't
exist.
Prior to this patch, when a lot of data was output by a running script
pyboard.py would try to capture all of this output into the "data"
variable, which would gradually slow down pyboard.py to the point where it
would have large CPU and memory usage (on the host) and potentially lose
data.
This patch fixes this problem by not accumulating the data in the case that
the data is not needed, which is when "data_consumer" is used.
This patch makes the DAC driver simpler and removes the need for the ST
HAL. As part of it, new helper functions are added to the DMA driver,
which also use direct register access instead of the ST HAL.
Main changes to the DAC interface are:
- The DAC uPy object is no longer allocated dynamically on the heap,
rather it's statically allocated and the same object is retrieved for
subsequent uses of pyb.DAC(<id>). This allows to access the DAC objects
without resetting the DAC peripheral. It also means that the DAC is only
reset if explicitly passed initialisation parameters, like "bits" or
"buffering".
- The DAC.noise() and DAC.triangle() methods now output a signal which is
full scale (previously it was a fraction of the full output voltage).
- The DAC.write_timed() method is fixed so that it continues in the
background when another peripheral (eg SPI) uses the DMA (previously the
DAC would stop if another peripheral finished with the DMA and shut the
DMA peripheral off completely).
Based on the above, the following backwards incompatibilities are
introduced:
- pyb.DAC(id) will now only reset the DAC the first time it is called,
whereas previously each call to create a DAC object would reset the DAC.
To get the old behaviour pass the bits parameter like: pyb.DAC(id, bits).
- DAC.noise() and DAC.triangle() are now full scale. To get previous
behaviour (to change the amplitude and offset) write to the DAC_CR (MAMP
bits) and DAC_DHR12Rx registers manually.
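A short sketch of the new behaviour (values are illustrative):
    import pyb
    dac = pyb.DAC(1)           # first call resets and initialises DAC 1
    dac.write(128)
    dac = pyb.DAC(1)           # same object returned, DAC state untouched
    dac = pyb.DAC(1, bits=12)  # passing bits forces a re-initialisation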
In CPython the random module is seeded differently on each import, and so
this new macro option MICROPY_PY_URANDOM_SEED_INIT_FUNC allows to implement
such a behaviour.
If MICROPY_HW_RTC_USE_BYPASS is enabled the RTC startup goes as follows:
- RTC is started with LSE in bypass mode to begin with
- if that fails to start (after a given timeout) then LSE is reconfigured
in non-bypass
- if that fails to start then RTC is switched to LSI
The qstr window size is not log-2 encoded, it's just the actual number (but
in mpy-tool.py this didn't lead to an error because the size is just used
to truncate the window so it doesn't grow arbitrarily large in memory).
Addresses issue #4635.
When running Linux on WSL, Popen.kill() can raise a ProcessLookupError if
the process does not exist anymore, which can happen here since the
previous statement already tries to close the process by sending Ctrl-D to
the running repl. This doesn't seem to be a problem on other OSes, so just
swallow the exception silently since it indicates the process has been
closed already, which after all is what we want.
Since commit da938a83b5 the tcp_arg() that is
set for the new connection is the new connection itself, and the parent
listening socket is found in the pcb->connected entry.
Use uos.dupterm for REPL configuration of the main USB_VCP(0) stream on
dupterm slot 1, if USB is enabled. This means dupterm can also be used to
disable the boot REPL port if desired, via uos.dupterm(None, 1).
For efficiency this adds a simple hook to the global uos.dupterm code to
work with streams that are known to be native streams.
These macros are unused, and they can conflict with other entities by the
same name. If needed they can be provided as static inline functions, or
just functions.
Fixes issue #4559.
Adds support for hardware i2c to the zephyr port. Similar to other ports
such as stm32 and nrf, we only implement the i2c protocol functions
(readfrom and writeto) and defer memory operations (readfrom_mem,
readfrom_mem_into, and writeto_mem) to the software i2c implementation.
This may need to change in the future because zephyr is considering
deprecating its i2c_transfer function in favor of i2c_write_read; in this
case we would probably want to implement the memory operations directly
using i2c_write_read.
Tested with the accelerometer on frdm_k64f and bbc_microbit boards.
For gpio_hold_en() to work properly (not draw additional current) pull
up/down must be disabled when hold is enabled. This patch makes sure this
is the case by reworking the pull constants to be a bit mask.
Adding a section about how to disable use of the linker flag
-flto, by setting the LTO=0 argument to make. Also, added a
note on recommended toolchains that work well with LTO
enabled.
Removing linker script for nrf52840 s140 v6.0.0 as pca10056
target board now points to the new v6.1.1. Also, removing the
entry from the download_ble_stack.sh script.
Removing linker script for nrf52832 s132 v6.0.0 as all target
boards now point to the new v6.1.1. Also, removing the entry
from the download_ble_stack.sh script.
Previously specifying None as the pull value would leave the pull up/down
state unchanged. This change makes it so -1 leaves the state unchanged and
None makes the pin float, as per the docs.
In this port JavaScript is the underlying "machine" and MicroPython is
transmuted into JavaScript by Emscripten. MicroPython can then run under
Node.js or in the browser.
Functions in these files may be needed when certain features are enabled
(eg dual core mode), even if the linker does not give a warning or error
about unresolved symbols.
This system makes it a lot easier to include external libraries as static,
native modules in MicroPython. Simply pass USER_C_MODULES (like
FROZEN_MPY_DIR) as a make parameter.
During make, makemoduledefs.py parses the current build's C files for
MP_REGISTER_MODULE(module_name, obj_module, enabled_define)
These are used to generate a header with the required entries for
"mp_rom_map_elem_t mp_builtin_module_table[]" in py/objmodule.c
This commit adds support for saving and loading .mpy files that contain
native code (native, viper and inline-asm). A lot of the ground work was
already done for this in the form of removing pointers from generated
native code. The changes here are mainly to link in qstr values to the
native code, and change the format of .mpy files to contain native code
blocks (possibly mixed with bytecode).
A top-level summary:
- @micropython.native, @micropython.viper and @micropython.asm_thumb/
asm_xtensa are now allowed in .py files when compiling to .mpy, and they
work transparently to the user.
- Entire .py files can be compiled to native via mpy-cross -X emit=native
and for the most part the generated .mpy files should work the same as
their bytecode version.
- The .mpy file format is changed to 1) specify in the header if the file
contains native code and if so the architecture (eg x86, ARMV7M, Xtensa);
2) for each function block the kind of code is specified (bytecode,
native, viper, asm).
- When native code is loaded from a .mpy file the native code must be
modified (in place) to link qstr values in, just like bytecode (see
py/persistentcode.c:arch_link_qstr() function).
In addition, this now defines a public, native ABI for dynamically loadable
native code generated by other languages, like C.
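For example, a script using these decorators can now be compiled to a .mpy
file; the function below is only illustrative:
    # foo.py -- valid when compiled with mpy-cross to a .mpy file
    @micropython.viper
    def add(x: int, y: int) -> int:
        return x + y
    # alternatively, the whole file can be compiled natively:
    #   mpy-cross -X emit=native foo.py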
The new compile-time option is MICROPY_DEBUG_MP_OBJ_SENTINELS, disabled by
default. This is to allow finer control of whether this debugging feature
is enabled or not (because, for example, this setting must be the same for
mpy-cross and the MicroPython main code when using native code generation).
When encoded in the mpy file, if qstr <= QSTR_LAST_STATIC then store two
bytes: 0, static_qstr_id. Otherwise encode the qstr as usual (either with
string data or a reference into the qstr window).
Reduces mpy file size by about 5%.
Instead of emitting two bytes in the bytecode for where the linked qstr
should be written to, it is now replaced by the actual qstr data, or a
reference into the qstr window.
Reduces mpy file size by about 10%.
This is an implementation of a sliding qstr window used to reduce the
number of qstrs stored in a .mpy file. The window size is configured to 32
entries which takes a fixed 64 bytes (16-bits each) on the C stack when
loading/saving a .mpy file. It allows to remember the most recent 32 qstrs
so they don't need to be stored again in the .mpy file. The qstr window
uses a simple least-recently-used mechanism to discard the least recently
used qstr when the window overflows (similar to dictionary compression).
This scheme only needs a single pass to save/load the .mpy file.
Reduces mpy file size by about 25% with a window size of 32.
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP. So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
This patch fixes a bug in the VM when breaking within a try-finally. The
bug has to do with executing a break within the finally block of a
try-finally statement. For example:
    def f():
        for x in (1,):
            print('a', x)
            try:
                raise Exception
            finally:
                print(1)
                break
            print('b', x)
    f()
Currently in uPy the above code will print:
    a 1
    1
    1
    segmentation fault (core dumped) micropython
Not only is there a seg fault, but the "1" in the finally block is printed
twice. This is because when the VM executes a finally block it doesn't
really know if that block was executed due to a fall-through of the try (no
exception raised), or because an exception is active. In particular, for
nested finallys the VM has no idea which of the nested ones have active
exceptions and which are just fall-throughs. So when a break (or continue)
is executed it tries to unwind all of the finallys, when in fact only some
may be active.
It's questionable whether break (or return or continue) should be allowed
within a finally block, because they implicitly swallow any active
exception, but nevertheless it's allowed by CPython (although almost never
used in the standard library). And uPy should at least not crash in such a
case.
The solution here relies on the fact that exception and finally handlers
always appear in the bytecode after the try body.
Note: there was a similar bug with a return in a finally block, but that
was previously fixed in b735208403
From https://github.com/micropython/oofatfs, branch work-R0.13c,
commit cb05c9486d3b48ffd6bd7542d8dbbab4b1caf790.
Large code pages (932, 936, 949, 950) have been removed from ffunicode.c
because they were not included in previous versions here.
To use HSI instead of HSE define MICROPY_HW_CLK_USE_HSI as 1 in the board
configuration file. The default is to use HSE.
HSI has been made the default for the NUCLEO_F401RE board to serve as an
example, and because early revisions of this board need a hardware
modification to get HSE working.
Also, to make it possible for ports to provide their own lwipopts.h, the
default include directory of extmod/lwip-include is no longer added and
instead a port should now make sure the correct include directory is
included in the list (can still use extmod/lwip-include).
This demonstrates how to use external QSPI flash in XIP (execute in place)
mode. The default configuration has all extmod/ code placed into external
QSPI flash, but other code can easily be put there by modifying the custom
f769_qspi.ld script.
A board can now use the make variables TEXT0_SECTIONS and TEXT1_SECTIONS to
specify the linker sections that should go in its firmware. Defaults are
provided which give the existing behaviour.
This optimisation eliminates the need to create a temporary normal dict.
The optimisation is enabled via MICROPY_COMP_CONST_LITERAL which is enabled
by default (although it only has an effect if OrderedDict is enabled).
Thanks to @pfalcon for the initial idea and implementation.
All exceptions that unwind through the async-with must be caught and
BaseException is the top-level class, which includes Exception and others.
Fixes issue #4552.
Some users of this module may require the LwIP stack to run at an elevated
priority, to protect against concurrency issues with processing done by the
underlying network interface. Since LwIP doesn't provide such protection
it must be done here (the other option is to run LwIP in a separate thread,
and use thread protection mechanisms, but that is a more heavyweight
solution).
This is only correct for the extmod/uos_dupterm.c implementation however,
as e.g. the cc3200 implementation does the mp_load_method() itself, and anyway
requires `read` instead of `readinto`.
esp_wifi_connect will return ESP_OK for the normal path of execution which
just means the reconnect is started, not that it is actually reconnected.
In such a case wifi.isconnected() should return False until the
reconnection is complete. After reconnect a GOT_IP event is called and it
will change wifi_sta_connected back to True.
This patch makes sure that char_data.props is first
assigned a value before other flags are OR'd in.
Resolves a compilation warning about a possibly uninitialized variable.
This patch makes sure that advertisement data is located in
persistent static RAM memory throughout the advertisement.
Also, setting m_adv_handle to the predefined
BLE_GAP_ADV_SET_HANDLE_NOT_SET value to indicate first time
usage of the handle. Upon first advertisement configuration
this will be populated with a handle value returned by the
stack (s132/s140).
After new layout of nordicsemi.com the direct links to
command line tools (nrfjprog) have changed to become dynamic.
This patch removes the old direct links to each specific OS
variant and is replaced with one single link to the download
landing page instead.
Currently all usages of mp_hal_pin_config_alt_static() set the pin speed to
"high" (50Mhz). The SDRAM interface typically runs much faster than this
so should be set to the maximum pin speed.
This commit adds mp_hal_pin_config_alt_static_speed() which allows setting
the pin speed along with the other alternate function details.
A few RTC constants weren't being parsed properly due to whitespace
differences, and this patch makes certain whitespace optional. Changes
made:
- allow for no space between /*!< and EXTI, eg for:
__IO uint32_t IMR; /*!<EXTI Interrupt mask register, Address offset: 0x00 */
- allow for no space between semicolon and start of comment, eg for:
__IO uint32_t ALRMASSR;/*!< RTC alarm A sub second register, Address offset: 0x44 */
The bug polling for readability was: if alloc==0 and tcp.item==NULL then
the code would incorrectly check tcp.array[iget] which is an invalid
dereference when alloc==0. This patch refactors the code to use a helper
function lwip_socket_incoming_array() to return the correct pointer for the
incoming connection array.
Fixes issue #4511.
As mentioned in #4450, `websocket` was experimental with a single intended
user, `webrepl`. Therefore, we'll make this change without a weak
link `websocket` -> `uwebsocket`.
If opening of /dev/mem has failed an `OSError` is appropriately raised, but
the next time `mem8/16/32` is accessed the invalid file descriptor is used
and the program gets a SIGSEGV.
Replaces "PYB: soft reboot" with "MPY: soft reboot", etc.
Having a consistent prefix across ports reduces the difference between
ports, which is a general goal. And this change won't break pyboard.py
because that tool only looks for "soft reboot".
This change makes it so that python3 is required by default to build
MicroPython. Python 2 can be used by specifying make PYTHON=python2.
This comes about due to a recent-ish change to PEP 394 that makes the
python command more optional than before (even with Python 2 installed);
see cd59ec03c8 (diff-1d22f7bd72cbc900670f058b1107d426)
Since the command python is no longer required to be provided by a
distribution we need to use either python2 or python3 as commands. And
python3 seems the obvious choice.
These macros could in principle be (inline) functions so it makes sense to
have them lower case, to match the other C API functions.
The remaining macros that are upper case are:
- MP_OBJ_TO_PTR, MP_OBJ_FROM_PTR
- MP_OBJ_NEW_SMALL_INT, MP_OBJ_SMALL_INT_VALUE
- MP_OBJ_NEW_QSTR, MP_OBJ_QSTR_VALUE
- MP_OBJ_FUN_MAKE_SIG
- MP_DECLARE_CONST_xxx
- MP_DEFINE_CONST_xxx
These must remain macros because they are used when defining const data (at
least, MP_OBJ_NEW_SMALL_INT is, so it makes sense to have
MP_OBJ_SMALL_INT_VALUE also a macro).
For those macros that have been made lower case, compatibility macros are
provided for the old names so that users do not need to change their code
immediately.
Adds support for 3 Cortex-M boards, selectable via "BOARD" in the Makefile:
- microbit, Cortex-M0 via nRF51822
- netduino2, Cortex-M3 via STM32F205
- mps2-an385, Cortex-M3 via FPGA
netduino2 is the default board because it's supported by older qemu
versions (down to at least 2.5.0).
Instead of checking each callback (currently storage and dma) explicitly
for each SysTick IRQ, use a simple circular function table indexed by the
lower bits of the millisecond tick counter. This allows callbacks to be
easily enabled/disabled at runtime, and scales well to a large number of
callbacks.
Previously crypto-algorithms impl was included even if MICROPY_SSL_MBEDTLS
was in effect, thus we relied on the compiler/linker to cut out the unused
functions.
This is a good board to demonstrate the use of Mboot because it only has a
USB HS port exposed so the native ST DFU mode cannot be used. With Mboot
this port can be used.
If a custom bootloader is enabled (eg mboot) then machine.bootloader() will
now enter that loader. To get the original ST DFU loader pass any argument
to the function, like machine.bootloader(1).
Don't exclude the Timer instance 1 entry from machine_timer_obj[] when
using soft PWM. The usage is already checked when creating the Timer,
so just create an empty entry.
If needed these parameters can be added back and made functional one at a
time. It's better to explicitly not support them than to silently allow
but ignore them.
Python defines warnings as belonging to categories, where category is a
warning type (descending from exception type). This is useful, as it e.g.
allows to disable warnings selectively and provide user-defined warning
types. So, implement this in MicroPython, except that categories are
represented just with strings. However, enough hooks are left to implement
categories differently per-port (e.g. as types), without need to patch each
and every usage.
With clock bypass enabled the attached SD card is clocked at the maximum
48MHz. But some SD cards are unreliable at these rates. Although it's
nice to have high speed transfers it's more important that the transfers
are reliable for all cards. So disable this clock bypass option.
This way the UART REPL does not need the MicroPython heap and exists
outside the MicroPython runtime, allowing characters to still be received
during a soft reset.
Auto-detection of the crystal frequency is convenient and allows for a
single binary for many different boards. But it can be unreliable in
certain situations so in production, for a given board, it's recommended to
configure the correct fixed frequency.
Configuration for the build is now specified using sdkconfig rather than
sdkconfig.h, which allows for much easier configuration with defaults from
the ESP IDF automatically applied. sdkconfig.h is generated using the new
ESP IDF kconfig_new tool written in Python. Custom configuration for a
particular ESP32 board can be specified via the make variable SDKCONFIG.
The esp32.common.ld file is also now generated using the standard ESP IDF
ldgen.py tool.
When the ESP IDF builds a project it puts all separate components into
separate .a library archives. And then the esp32.common.ld linker script
references these .a libraries by explicit name to put certain object files
in iRAM.
This patch does a similar thing for the custom build system used here,
putting all IDF .o's into their respective .a. So a custom linker script
is no longer needed.
ISR's no longer need to be in iRAM, and the ESP IDF provides an option to
specify that they are in iRAM if an application needs lower latency when
handling them. But we don't use this feature for user interrupts: both
timer and gpio ISR routines are registered without the ESP_INTR_FLAG_IRAM
option, and so the scheduling code no longer needs to be in iRAM.
The new compile-time option is MICROPY_HW_USB_MAX_POWER_MA. Set this in
the board configuration file to the maximum current in mA that the board
will draw over USB. The default is 500mA.
The new compile-time option is MICROPY_HW_USB_SELF_POWERED. Set this
option to 1 in the board configuration file to indicate that the USB device
is self powered. This option is disabled by default (previous behaviour).
It can be that LSEON and LSERDY are set yet the RTC is not enabled (this
can happen for example when coming out of the ST DFU mode on an F405 with
the RTC not previously initialised). In such a case the RTC is never
started because the code thinks it's already running. This patch fixes
this case by always checking if RTCEN is set when booting up (and also
testing for a valid RTCSEL value in the case of using an LSE).
One can't use pthread calls in a signal handler because they are not
async-signal-safe (see man signal-safety). Instead, sem_post can be used
to post from within a signal handler and this should be more efficient than
using a busy wait loop, waiting on a volatile variable.
This header is deprecated as of mbedtls 2.8.0, as shipped with Ubuntu
18.04. Leads to #warning which is promoted to error with uPy compile
options.
Note that the current version of mbedtls is 2.14 at the time of writing.
The machine.sleep() function can be misleading because it clashes with
time.sleep() which has quite different semantics. So change it to
machine.lightsleep() which shows that it is closer in behaviour to
machine.deepsleep().
Also, add an optional argument to these two sleep functions to specify a
maximum time to sleep for. This is a common operation and underlying
hardware usually has a special way of performing this operation.
The existing machine.sleep() function will remain for backwards
compatibility purposes, and it can simply be an alias for
machine.lightsleep() without arguments. The behaviour will be the same.
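A minimal sketch of the resulting API (durations in milliseconds, as is usual
for machine functions):
    import machine
    machine.lightsleep(1000)   # light sleep for at most 1000 ms, then continue
    machine.deepsleep(10000)   # deep sleep for at most 10000 ms (wakes via reset)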
If MICROPY_PERSISTENT_CODE_LOAD or MICROPY_ENABLE_COMPILER are enabled then
code gets enabled that calls file reading functions which may be disabled
if no readers have been implemented.
To fix this, introduce a MICROPY_HAS_FILE_READER variable, which is
automatically set if MICROPY_READER_POSIX or MICROPY_READER_VFS is set but
can also be manually set if a custom reader is being implemented. Then
disable the file reading calls if this is not set.
For architectures where size_t is less than 32 bits (eg 16 bits) the args
must be cast to uint32_t so the left shift will work. For architectures
where size_t is greater than 32 bits (eg 64 bits) this new casting will not
lose any bits because the end result must anyway fit in a uint32_t.
This aligns more closely with the hardware, that there are two, fixed HW
SPI peripherals. And it allows to recreate the HW SPI objects without
error, as well as create them again after a soft reset.
Fixes issue #4103.
In order to suit the more common 800KHz by default (instead of 400KHz), and
also have the same behaviour as the esp8266 port.
Resolves #4396.
Note! This is a breaking change. Anyone that has previously used the
NeoPixel class on an ESP32 board may be affected.
The original behaviour of open-drain-high was to use the open-drain mode of
the GPIO pin, and this seems to make driving a DHT more reliable. See
issue #4233.
The ESP IDF system already provides a math library, and that one is likely
to be better tuned to the Xtensa architecture. The IDF components are also
tested against its own math library, so best not to override it. Using the
system provided library also allows to easily switch to double-precision
floating point by changing MICROPY_FLOAT_IMPL to MICROPY_FLOAT_IMPL_DOUBLE.
So that the user can explicitly deactivate UART(0) if needed. See
issue #4314.
This introduces some risk to "brick" the device, if the user disables the
REPL without providing an alternative REPL (eg WebREPL), or any way to
reenable it. In such a case the device needs to be erased and
reprogrammed. This seems unavoidable, given the desire to have the option
to use the UART for something other than the REPL.
Without the static qualifier these objects will be kept by the linker even
if they are unused. So this patch saves some RAM when these features are
unused by a board.
If there are many short reads to a socket in a row (eg by readline) then
releasing and acquiring the GIL each time will give very poor throughput.
So first poll the socket to see if it has data, and if it does then don't
release the GIL.
Otherwise, if multiple threads are active, printing data to the REPL may be
very slow because in some cases only one character is output per call to
mp_hal_stdout_tx_strn.
Changes to the layout of the bytecode header meant that this debug code was
no longer compiling. This is now fixed and a new compile-time option is
introduced, MICROPY_DEBUG_VM_STACK_OVERFLOW, to turn on this feature (which
is disabled by default). This option is needed because more than one file
needs to cooperate to make this check work.
Under python3 (tested with 3.6.7) bytes with a list of integers as an
argument returns a different result than under python 2.7 (tested with
2.7.15rc1) which causes pydfu.py to fail when run under 2.7. Changing
bytes to bytearray makes pydfu work properly under both Python 2.7 and
Python 3.6.
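A rough illustration of the incompatibility being worked around:
    # Python 3: bytes([1, 2, 3])  -> b'\x01\x02\x03'
    # Python 2: bytes([1, 2, 3])  -> '[1, 2, 3]' (bytes is an alias for str)
    # bytearray([1, 2, 3]) behaves the same under both, hence the change:
    data = bytearray([0x01, 0x02, 0x03])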
On MCUs other than F4 the ORE (overrun error) flag needs to be cleared
independently of clearing RXNE, even though both are wired to trigger the
same RXNE IRQ. In the case that an overrun occurred it's necessary to
explicitly clear the ORE flag or else the RXNE interrupt will keep firing.
Otherwise IRQs may not be enabled for the user UART.irq() handler. In
particular this fixes the user IRQ_RXIDLE interrupt so that it triggers
even when there is no RX buffer.
It's more robust to have the version defined statically in a header file,
rather than dynamically generating it via git using a git tag. In case
git doesn't exist, or a different source control tool is used, it's
important to still have the uPy version number available.
The new option MICROPY_HW_SDCARD_MOUNT_AT_BOOT can now be defined to 0 in
mpconfigboard.h to allow SD hardware to be enabled but not auto-mounted at
boot. This feature is enabled by default to retain previous behaviour.
Previously, if an SD card is enabled in hardware it is also used to boot
from. While this can be disabled with a SKIPSD file on internal flash,
this won't be available at first boot or if the internal flash gets
corrupted.
The older "bool has_finaliser" gets recast as GC_ALLOC_FLAG_HAS_FINALISER=1
so this is a backwards compatible change to the signature. Since bool gets
implicitly converted to 1 this patch doesn't include conversion of all
calls.
Both mp_type_array and mp_type_memoryview use the same object structure,
mp_obj_array_t, but for the case of memoryview, some fields, e.g. "free",
have different meaning. As the "free" field is also a bitfield, assume
that (anonymous) union can't be used here (for the concerns of possible
compatibility issues with wide array of toolchains), and just add a field
alias using a #define. As it's a define, it should be a distinctive
identifier, so use the verbose "memview_offset" to avoid any clashes.
If you happen to only have a really simple frozen file that doesn't contain
any new qstrs then the generated frozen_mpy.c file contains an empty
enumeration which causes a C compile time error.
Following an equivalent fix to py/bc.c. The reason the incorrect values
for the opcode constants were not previously causing a bug is because they
were never being used: these opcodes always have qstr arguments so the part
of the code that was comparing them would never be reached.
Thanks to @malinah for finding the problem and providing the initial patch.
All 4 opcodes that can have caching bytes also have qstrs, so the test for
them must go in the qstr part of the code. The reason this incorrect
calculation of the opcode size did not lead to a bug is because the caching
byte is at the end of the opcode (byte, qstr, qstr, cache) and is always
0x00 when saving/loading, so was just treated as a single byte no-op
opcode. Hence these opcodes were being saved/loaded/decoded correctly.
Thanks to @malinah for finding the problem and providing the initial patch.
Due to new webpages at nordicsemi.com, the download links
for Bluetooth LE stacks were broken.
This patch updates the links to new locations for the current
targets.
This UART_HandleTypeDef is quite large (around 70 bytes in RAM needed for
each UART object) and is not needed: instead the state of the peripheral
held in its registers provides all the required information.
mp_obj_new_exception_msg() assumes that the message passed to it is in ROM
and so can use its data directly to create the string object for the
argument of the exception, saving RAM. At the same time, this approach
also makes sure that there is no attempt to format the message with printf,
which could lead to faults if the message contained % characters.
Fixes issue #3004.
SHORT, INT, LONG, LONGLONG, and unsigned (U*) variants are being defined.
This is done at compile time using GCC-style predefined macros like
__SIZEOF_INT__. If the compiler doesn't have such defines, no such types
will be defined.
Instead of assuming that the method is a bytecode object, and only
supporting load of __name__, make the operation generic by delegating the
load to the method object itself. Saves a bit of code size and fixes the
case of attempting to load __name__ on a native method, see issue #4028.
The pin alternate function information is derived from ST's datasheet
https://www.st.com/resource/en/datasheet/stm32l432kc.pdf
In the datasheet, the line 2 of AF4 includes I2C2 but actually the chip
does not have I2C2 so it is removed.
As per the machine.UART documentation, this is used to set the length of
the RX buffer. The legacy read_buf_len argument is retained for backwards
compatibility, with rxbuf overriding it if provided.
Also change the order of printing of flow so it is after stop (so bits,
parity, stop are one after the other), and reduce code size by using
mp_print_str instead of mp_printf where possible.
See issue #1981.
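A brief sketch of the argument, assuming the stm32-style pyb.UART constructor
(pin and baudrate values are placeholders):
    import pyb
    uart = pyb.UART(1, 9600, rxbuf=512)         # 512-byte receive buffer
    uart = pyb.UART(1, 9600, read_buf_len=512)  # legacy name, still accepted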
A new option MICROPY_GC_STACK_ENTRY_TYPE is added to select a custom type
instead of size_t for the gc_stack array items. This can be beneficial for
small devices, especially those that are low on memory anyway. If a device
has 1MB or less of heap (and 16-byte GC blocks) then this type can be
uint16_t, saving 128 bytes of RAM.
The recent implementation of the listen backlog meant that the logic to
test for readability of such a socket changed, and this commit updates the
logic to work again.
Array to hold waiting connections is in-place if backlog=1, else is a
dynamically allocated array. Incoming connections are processed FIFO
style to maintain fairness.
This is necessary for two reasons: 1) FreeRTOS still needs the TCB data
structure even after vPortCleanUpTCB has been called, so this latter hook
function cannot free the TCB, and there is no where else to safely delete
it (this behaviour has changed recently in the ESP IDF); 2) when using
external SPI RAM the uPy heap is in this external memory but the task stack
must be allocated from internal SRAM.
Fixes issue #3904.
A DFU device must be in the idle state before it can be programmed, and
this requires either clearing the status or aborting, depending on its
current state. Code is added to do this. And the USB transfer size is now
automatically detected so devices with a size less than 2048 bytes work
correctly.
We standardized to provide uos.remove() as a more obvious and user-friendly
name. That's what is written in the docs. The Unix port implementation
predates this convention, so update it now.
There was an assumption that all names in a module dict are qstr's.
However, they can be dynamically generated (by assigning to globals()),
and in case of a long name, it won't be a qstr. Handle this situation
properly, including taking care of not creating superfluous qstr's for
names starting with "_" (which aren't imported by "import *").
CPython does not have an implementation of select.poll() on some
operating systems (Windows, OSX depending on version) so skip the
test in those cases instead of failing it.
Taking the address of a local variable is mildly expensive, in code size
and stack usage. So optimise scope_find_or_add_id() to not need to take a
pointer to the "added" variable, and instead take the kind to use for newly
added identifiers.
This ensures that implicit variables are only converted to implicit
closed-over variables (nonlocals) at the very end of the function scope.
If variables are closed-over when first used (read from, as was done prior
to this commit) then this can be incorrect because the variable may be
assigned to later on in the function which means they are just a plain
local, not closed over.
Fixes issue #4272.
The way it was written previously the variable x was not an implicit
nonlocal, it was just a normal local (but the compiler has a bug which
incorrectly makes it a nonlocal).
Building axtls gives a lot of warnings with -Wall enabled, and explicitly
disabling all of them cannot be done in a way compatible with gcc and
clang, and likely other compilers. So just use -Wno-all to prevent all of
the extra warnings (in addition to the necessary -Wno-unused-parameter,
-Wno-uninitialized, -Wno-sign-compare and -Wno-old-style-definition).
Fixes issue #4182.
1. Use uctypes.bytearray_at().
Implementation of the "ffi" module predates that of "uctypes", so
initially some convenience functions to access memory were added
to ffi. Later, they landed in uctypes (which follows CPython's
ctypes module).
So, replace undocumented experimental functions from ffi to
documented ones from uctypes.
2. Use more suitable type codes for arguments (e.g. "P" (const void*)
instead of "p" (void*).
3. Some better var naming.
4. Clarify some messages printed by the example.
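For instance, the documented uctypes helper can replace the old ffi one along
these lines (addr is a placeholder address):
    import uctypes
    buf = uctypes.bytearray_at(addr, 16)  # mutable 16-byte view of memory at addr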
Examples are added to the beginning of the module docs, similarly to docs
for many other modules.
Improvements to grammar, style, and clarity. Some paragraphs are updated
with better suggestions. A warning is added about the effect incorrect usage
of the module may have. Describe the fact that the offset range used in one
defined structure is limited.
sizeof() can work in two ways: a) calculate the size of an already
instantiated structure ("sizeof variable") - in this case we already know the
layout; b) size of a structure description ("sizeof type"). In the latter
case, LAYOUT_NATIVE was assumed, but there should be the possibility to
calculate the size for other layouts too. So, with this patch, there are now
2 forms:
uctypes.sizeof(struct)
uctypes.sizeof(struct_desc, layout)
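A small sketch of the two forms (the descriptor is illustrative only):
    import uctypes
    desc = {"a": uctypes.UINT32 | 0, "b": uctypes.UINT8 | 4}
    print(uctypes.sizeof(desc, uctypes.LITTLE_ENDIAN))  # size of a description
    data = bytearray(8)
    s = uctypes.struct(uctypes.addressof(data), desc, uctypes.LITTLE_ENDIAN)
    print(uctypes.sizeof(s))  # size of an instantiated structure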
Configurable via MICROPY_MODULE_GETATTR, disabled by default. Among other
things __getattr__ for modules can help to build lazy loading / code
unloading at runtime.
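A sketch of how a module might use this for lazy loading (module and
attribute names below are made up):
    # mypkg.py -- requires MICROPY_MODULE_GETATTR
    def __getattr__(name):
        if name == "heavy":
            import heavy_impl  # hypothetical, imported only on first access
            return heavy_impl
        raise AttributeError(name)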
Configurable via MICROPY_PY_BUILTINS_STR_COUNT. Default is enabled.
Disabled for bare-arm, minimal, unix-minimal and zephyr ports. Disabling
it saves 408 bytes on x86.
Some Python linters don't like unconditional except clauses because they
catch SystemExit and KeyboardInterrupt, which usually is not the intended
behaviour.
Part of this test was trying to test some functionality of __getattribute__
but this method name was misspelt so it wasn't doing anything useful.
Fixing the typo in this name makes the test fail because MicroPython
doesn't support user defined __getattribute__ methods. So this part of the
test is removed. The remaining tests are modified slightly to make it
clearer what they are testing.
1. Return correct error code for non-blocking vs timed out socket
(POSIX returns EAGAIN for both, we want ETIMEDOUT in case of timed
out socket). To achieve this, blocking/non-blocking flag is added
to the mp_obj_socket_t, to avoid issuing fcntl() syscall each time
EAGAIN occurs. (mp_obj_socket_t used to be 8 bytes, having some room
in a standard 16-byte alloc block.)
2. Handle socket.settimeout(0) properly - in Python, that means
non-blocking mode, but SO_RCVTIMEO/SO_SNDTIMEO of 0 is infinite
timeout.
3. Overall, make sure that socket.settimeout() call switches blocking
state as expected.
Prior to this commit the USB CDC used the USB start-of-frame (SOF) IRQ to
regularly check if buffered data needed to be sent out to the USB host.
This wasted resources (CPU, power) if no data needed to be sent.
This commit changes how the USB CDC transmits buffered data:
- When new data is first available to send the data is queued immediately
on the USB IN endpoint, ready to be sent as soon as possible.
- Subsequent additions to the buffer (via usbd_cdc_try_tx()) will wait.
- When the low-level USB driver has finished sending out the data queued
in the USB IN endpoint it calls usbd_cdc_tx_ready() which immediately
queues any outstanding data, waiting for the next IN frame.
The benefits on this new approach are:
- SOF IRQ does not need to run continuously so device has a better chance
to sleep for longer, and be more responsive to other IRQs.
- Because SOF IRQ is off, current consumption is reduced by a small amount,
roughly 200uA when USB is connected (measured on PYBv1.0).
- CDC tx throughput (USB IN) on PYBv1.0 is about 2.3x faster (USB OUT is
unchanged).
- When USB is connected, Python code that is executing is slightly faster
because SOF IRQ no longer interrupts continuously.
- On F733 with USB HS, CDC tx throughput is about the same as prior to this
commit.
- On F733 with USB HS, Python code is about 5% faster because of no SOF.
As part of this refactor, the serial port should no longer echo initial
characters when the serial port is first opened (this only used to happen
rarely on USB FS, but on USB HS it was more evident).
So these constant objects can be loaded by dereferencing the REG_FUN_TABLE
pointer instead of loading immediate values. This reduces the size of
generated native code (when such constants are used), and means that
pointers to these constants are no longer stored in the assembly code.
Otherwise there is really nothing that can be done, it can't be unlocked by
the user because there is no way to allocate memory to execute the unlock.
See issue #4205 and #4209.
The maximum index into mp_fun_table is currently less than 1024 and should
stay that way to keep things efficient for all architectures, so there is
no need to handle loading the pointer directly via a literal in this
function.
All architectures now have a dedicated register to hold the pointer to the
native function table mp_fun_table, and so they all need to load this
register at the start of the native function. This commit makes the
loading of this register uniform across architectures by passing the
pointer in the constant table for the native function, and then loading the
register from the constant table. Doing it this way means that the pointer
is not stored in the assembly code, helping to make the code more portable.
Instead of storing the function pointer directly in the assembly code.
This makes the generated code more independent of the runtime (so easier to
relocate the code), and reduces the generated code size.
The esp register is always a fixed distance below ebp, and using esp to
reference locals on the stack frees up the ebp register for general purpose
use (which is important for an architecture with only 8 user registers).
Instead of storing the function pointer directly in the assembly code.
This makes the generated code more independent of the runtime (so easier to
relocate the code), and reduces the generated code size.
The rsp register is always a fixed distance below rbp, and using rsp to
reference locals on the stack frees up the rbp register for general purpose
use.
For s132 and s140, GAP_ADV_MAX_SIZE was set to
BLE_GATT_ATT_MTU_DEFAULT, which is 23. The correct value
should have been 31, but there is no define for this in
the s132/s140 header files as there is for s110.
Updating define in ble_drv.c to the correct value of 31.
The macros are MICROPY_HEAP_START and MICROPY_HEAP_END, and if not defined
by a board then the default values will be used (maximum heap from SRAM as
defined by linker symbols).
As part of this commit the SDRAM initialisation is moved to much earlier in
main() to potentially make it available to other peripherals and avoid
re-initialisation on soft-reboot. On boards with SDRAM enabled the heap
has been set to use that.
Add some more POSIX compatibility by adding a d_type field to the
dirent structure and defining corresponding macros so listdir_next
in the unix port's modos.c can use it, the end result being that uos.ilistdir
now reports the file type.
This value is unused. It was an artifact of early draft design, but
bitfields were optimized to use scalar one-word encoding, to allow
compact encoding of typical multiple bitfields in MCU control
registers.
This test doesn't check the actual I/O behavior, just "static" invariants
like behavior on duplicate calls or calls when I/O object is not registered
with poller.
With this commit there is now only one entry point into the whole
documentation, which describes the general MicroPython language, and then
from there there are links to information about specific platforms/ports.
This commit doesn't change content (almost, it does fix a few internal
links), it just reorganises things.
This commit adds first class support for yield and yield-from in the native
emitter, including send and throw support, and yields enclosed in exception
handlers (which requires pulling down the NLR stack before yielding, then
rebuilding it when resuming).
This has been fully tested and is working on unix x86 and x86-64, and
stm32. Also basic tests have been done with the esp8266 port. Performance
of existing native code is unchanged.
The nlr_buf_t doesn't need to be part of the Python value stack (as it was
before this commit), it's simpler to have it separated as auxiliary state
that lives on the C stack. This will help adding yield support because in
that case the nlr_buf_t and Python value stack live in separate memory
areas (C stack and heap respectively).
Instead of at end of state, n_state - 1. It was originally (way back in
v1.0) put at the end of the state because the VM didn't have a pointer to
the start. But now that the VM takes a mp_code_state_t pointer it does
have a pointer to the start of the state so can put the exception object
there.
This commit saves about 30 bytes of code on all architectures, and, more
importantly, reduces C-stack usage by a couple of words (8 bytes on Thumb2
and 16 bytes on x86-64) for every (non-generator) call of a bytecode
function because fun_bc_call no longer needs to remember the n_state
variable.
This makes these special methods have the same calling behaviour as other
methods in a class instance (mp_convert_member_lookup() is already called
by mp_obj_class_lookup()).
And remove related comment about needing such protection when calling send.
Reasoning for removal is as follows:
- mp_resume is only called by the VM in YIELD_FROM opcode
- if send_value != MP_OBJ_NULL then throw_value == MP_OBJ_NULL
- so if __next__ or send are called then throw_value == MP_OBJ_NULL
- if __next__ or send raise an exception without nlr protection then the
exception will be handled by the global exception handler of the VM
- this handler already has code to handle exceptions raised in YIELD_FROM,
including correct handling of StopIteration
- this handler doesn't handle the case of injection of GeneratorExit, but
this won't be needed because throw_value == MP_OBJ_NULL
Note that it's already possible for mp_resume() to raise an exception
(including StopIteration) from the unprotected call to type->iternext(), so
that's why the VM already has code to handle the case of exceptions coming
out of mp_resume().
This commit reduces code size by a bit, and significantly reduces C stack
usage when using yield-from, from 88 bytes down to 40 for Thumb2, and 152
down to 72 bytes for x86-64 (better than half). (Note that gcc doesn't
seem to tail-call optimise the call from mp_resume() to mp_obj_gen_resume()
so this saving in C stack usage helps all uses of yield-from.)
mp_make_raise_obj must be used to convert a possible exception type to an
instance object, otherwise the VM may raise a non-exception object.
An existing test is adjusted to test this case, with the original test
already moved to generator_throw.py.
This matches how bytecode does it, and matches the signature of
mp_emit_glue_assign_native. Since the native emitter doesn't support
nan-boxing uintptr_t and mp_uint_t are anyway the same bit-width.
After the previous commit this macro is no longer needed by the native
emitter because live heap pointers are no longer stored in generated native
machine code.
This commit changes native code to handle constant objects like bytecode:
instead of storing the pointers inside the native code they are now stored
in a separate constant table (such pointers include objects like bignum,
bytes, and raw code for nested functions). This removes the need for the
GC to scan native code for root pointers, and takes a step towards making
native code independent of the runtime (eg so it can be compiled offline by
mpy-cross).
Note that the changes to the struct scope_t did not increase its size: on a
32-bit architecture it is still 48 bytes, and on a 64-bit architecture it
decreased from 80 to 72 bytes.
All concrete network classes are now moved to their own file (eg
network.WLAN.rst) and deconditionalised (remove ..only:: directives). This
makes the network documentation the same for all ports. After this change
there are no more "..only::" directives for different ports, and the only
difference among ports is the very front page of the docs.
Nan and inf (signed and unsigned) are also handled correctly by using
signbit (they were also handled correctly with "val<0", but that didn't
handle -0.0 correctly). A test case is added for this behaviour.
When obj.h is compiled as C++ code, the cl compiler emits a warning about
possibly unsafe mixing of size_t and bool types in the or operation in
MP_OBJ_FUN_MAKE_SIG. Similarly there's an implicit narrowing integer
conversion in runtime.h. This commit fixes this by being explicit.
This is an improvement over the previous behavior where str was returned for
both str and bytes input formats. This new behaviour is also consistent
with how the % operator works, as well as many other str/bytes methods.
It should be noted that it's not how current versions of CPython work,
where there's a gap in the functionality and bytes.format() is not
supported.
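In short, assuming a build with this behaviour enabled:
    print("x = {}".format(1))   # str in, str out:     'x = 1'
    print(b"x = {}".format(1))  # bytes in, bytes out: b'x = 1' (not supported by CPython)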
This will allow to e.g. implement HTTP Digest authentication.
Adds 540 bytes for x86_32, 332 for arm_thumb2 (for Unix port, which already
includes axTLS library).
This commit adds the math.factorial function in two variants:
- squared difference, which is faster than the naive version, relatively
compact, and non-recursive;
- a mildly optimised recursive version, faster than the above one.
There are some more optimisations that could be done, but they tend to take
more code, and more storage space. The recursive version seems like a
sensible compromise.
The new function is disabled by default, and uses the non-optimised version
by default if it is enabled. The options are MICROPY_PY_MATH_FACTORIAL
and MICROPY_OPT_MATH_FACTORIAL.
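Once enabled via the options above, usage is as expected:
    import math
    print(math.factorial(10))  # 3628800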
Configuring clocks is a critical operation and is best to avoid when
possible. If the clocks really need to be reset to the same values then
one can pass in a slightly higher value, eg 168000001 Hz to get 168MHz.
This ensures that on first boot the most optimal settings are used for the
voltage scaling and flash latency (for F7 MCUs).
This commit also provides more fine-grained control for the flash latency
settings.
Power and clock control is low-level functionality and it makes sense to
have it in a dedicated file, at least so it can be reused by other parts of
the code.
On F7s PLLSAI is used as a 48MHz clock source if the main PLL cannot
provide such a frequency, and on L4s PLLSAI1 is always used as a clock
source for the peripherals. This commit makes sure these PLLs are
re-enabled upon waking from stop mode so the peripherals work.
See issues #4022 and #4178 (L4 specific).
There appears to be an issue on Windows with CPython >= 3.6,
sys.stdout.flush() raises an exception:
OSError: [WinError 87] The parameter is incorrect
It works fine to just catch and ignore the error on the flush line. Tested
on Windows 10 x64 1803 (Build 17134.228), Python 3.6.4 amd64.
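The workaround amounts to something like this (a sketch, not the exact
patch):
    import sys
    try:
        sys.stdout.flush()
    except OSError:
        pass  # e.g. WinError 87 on Windows with CPython >= 3.6; safe to ignore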
This patches avoids multiplying with negative powers-of-10 when parsing
floating-point values, when those powers-of-10 can be exactly represented
as a positive power. When represented as a positive power and used to
divide, the resulting float will not have any rounding errors.
The issue is that mp_parse_num_decimal will sometimes not give the closest
floating representation of the input string. Eg for "0.3", which can't be
represented exactly in floating point, mp_parse_num_decimal gives a
slightly high (by 1LSB) result. This is because it computes the answer as
3 * 0.1, and since 0.1 also can't be represented exactly, multiplying by 3
multiplies up the rounding error in the 0.1. Computing it as 3 / 10, as
now done by the change in this commit, gives an answer which is as close to
the true value of "0.3" as possible.
Changes made:
- make use of MP_OBJ_TO_PTR and MP_OBJ_FROM_PTR where necessary
- fix shadowing of index variable i, renamed to j
- fix type of above variable to size_t to prevent comparison warning
- fix shadowing of res variable
- use "(void)" instead of "()" for functions that take no arguments
This commit implements PEP479 which disallows raising StopIteration inside
a generator to signal that it should be finished. Instead, the generator
should simply return when it is complete.
See https://www.python.org/dev/peps/pep-0479/ for details.
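A minimal illustration of the change:
    def gen():
        yield 1
        return  # correct way to finish a generator under PEP 479
        # raise StopIteration  # previously signalled the end; now treated as an error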
This part is functionally similar to STM32F767xx (they share a datasheet)
so support is generally comparable. When adding board support the
stm32f767_af.csv and stm32f767.ld should be used.
In 0e80f345f8 the inplace operations __iadd__
and __isub__ were made unconditionally available, so the comment about this
section is changed to reflect that.
Loading a pointer by indexing into the native function table mp_fun_table,
rather than loading an immediate value (via a PC-relative load), uses less
code space.
This commit makes viper functions have the same signature as native
functions, at the level of the emitter/assembler. This means that viper
functions can now be wrapped in the same uPy object as native functions.
Viper functions are now responsible for parsing their arguments (before it
was done by the runtime), and this makes calling them more efficient (in
most cases) because the viper entry code can be custom generated to suit
the signature of the function.
This change also opens the way forward for viper functions to take
arbitrary numbers of arguments, and for them to handle globals correctly,
among other things.
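For example, a viper function with a typed signature; the emitter-level
changes are internal, this just shows the user-visible form such functions
take:

    import micropython

    @micropython.viper
    def add(x: int, y: int) -> int:
        # argument parsing is generated to suit this signature
        return x + y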
Now that the compiler can store the results of the viper types in the
scope, the viper parameter annotation compilation stage can be merged with
the normal parameter compilation stage.
With 5 arguments to mp_arg_check_num(), some architectures need to pass
values on the stack. So compressing n_args_min, n_args_max, takes_kw into
a single word and passing only 3 arguments makes the call more efficient,
because almost all calls to this function pass in constant values. Code
size is also reduced by a decent amount:
bare-arm: -116
minimal x86: -64
unix x64: -256
unix nanbox: -112
stm32: -324
cc3200: -192
esp8266: -192
esp32: -144
If DTTOIF() macro is not defined, the code refers to MP_S_IFDIR, etc.
symbols defined in extmod/vfs.h, so should include it.
This fixes build for Android.
Prior to this commit a function compiled with the native decorator
@micropython.native would not work correctly when accessing global
variables, because the globals dict was not being set upon function entry.
This commit fixes this problem by, upon function entry, setting as the
current globals dict the globals dict context the function was defined
within, as per normal Python semantics, and as bytecode does. Upon
function exit the original globals dict is restored.
In order to restore the globals dict when an exception is raised the native
function must guard its internals with an nlr_push/nlr_pop pair. Because
this push/pop is relatively expensive, in both C stack usage for the
nlr_buf_t and CPU execution time, the implementation here optimises things
as much as possible. First, the compiler keeps track of whether a function
even needs to access global variables. Using this information the native
emitter then generates three different kinds of code:
1. no globals used, no exception handlers: no nlr handling code and no
setting of the globals dict.
2. globals used, no exception handlers: an nlr_buf_t is allocated on the
C stack but it is not used if the globals dict is unchanged, saving
execution time because nlr_push/nlr_pop don't need to run.
3. function has exception handlers, may use globals: an nlr_buf_t is
allocated and nlr_push/nlr_pop are always called.
In the end, native functions that don't access globals and don't have
exception handlers will run more efficiently than those that do.
Fixes issue #1573.
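An illustrative (hypothetical) case that this fixes:

    import micropython

    value = 42

    @micropython.native
    def read_value():
        # the function's defining globals dict is now active here, so the
        # module-level name resolves as per normal Python semantics
        return value

    print(read_value())  # 42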
The HAL DMA functions enable SDMMC interrupts before fully resetting the
peripheral, and this can lead to a DTIMEOUT IRQ during the initialisation
of the DMA transfer, which then clears out the DMA state and leads to the
read/write not working at all. The DTIMEOUT is there from previous SDMMC
DMA transfers, even those that succeeded, and is of duration ~180 seconds,
which is 0xffffffff / 24MHz (default DTIMER value, and clock of
peripheral).
To work around this issue, fully reset the SDMMC peripheral before calling
the HAL SD DMA functions.
Fixes issue #4110.
Since mbedtls 2.7.0 new digest functions were introduced with a "_ret"
suffix to allow the functions to return an error message (eg, if the
underlying hardware acceleration failed). These new functions must be used
instead of the old ones to prevent deprecation warnings, or link errors for
missing functions, depending on the mbedtls configuration.
The flash-IRQ handler is used to flush the storage cache, ie write
outstanding block data from RAM to flash. This is triggered by a timeout,
or by a direct call to flush all storage caches.
Prior to this commit, a timeout could trigger the cache flushing to occur
during the execution of a read/write to external SPI flash storage. In
such a case the storage subsystem would break down.
SPI storage transfers are already protected against USB IRQs, so by
changing the priority of the flash IRQ to that of the USB IRQ (what is
done in this commit) the SPI transfers can be protected against any
timeouts triggering a cache flush (the cache flush would be postponed until
after the transfer finished, but note that in the case of SPI writes the
timeout is rescheduled after the transfer finishes).
The handling of internal flash sync'ing needs to be changed to directly
call flash_bdev_irq_handler(), because sync may be called with the IRQ
priority already raised (eg when called from a USB MSC IRQ handler).
MCUs that have a PLLSAI can use it to generate a 48MHz clock for USB, SDIO
and RNG peripherals. In such cases the SYSCLK is not restricted to values
that allow the system PLL to generate 48MHz, but can be any frequency.
This patch allows such configurability for F7 MCUs, allowing the SYSCLK to
be set in 2MHz increments via machine.freq(). PLLSAI will only be enabled
if needed, and consumes about 1mA extra. This fine grained control of
frequency is useful to get accurate SPI baudrates, for example.
If bytearray is constructed from str, a second argument of encoding is
required (in CPython), and third arg of Unicode error handling is allowed,
e.g.:
bytearray("str", "utf-8", "strict")
This is similar to bytes:
bytes("str", "utf-8", "strict")
This patch just allows passing the 2nd/3rd arguments to bytearray, but
doesn't try to validate them, so as not to impact code size. (This is
also similar to how the bytes constructor is handled, though it does a
bit more validation, eg checking that an encoding argument is passed in
the case of a str argument.)
This removes the need for a separate axtls build stage, and builds all
axtls object files along with other code. This simplifies and cleans up
the build process, automatically builds axtls when needed, and puts the
axtls object files in the correct $(BUILD) location.
The MicroPython axtls configuration file is provided in
extmod/axtls-include/config.h
A recent version of arm-none-eabi-gcc (8.2.0) will warn about unused packed
attributes in USB_WritePacket and USB_ReadPacket. This patch suppresses
such warnings for this file only.
This patch adds full support for unwinding jumps to the native emitter.
This means that return/break/continue can be used in try-except,
try-finally and with statements. For code that doesn't use unwinding jumps
there is almost no overhead added to the generated code.
The native emitter keeps the current exception in a slot in its C stack
(instead of on its Python value stack), so when it catches an exception it
must explicitly clear that slot so the same exception is not reraised later
on.
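For example, code like the following now compiles and behaves correctly
under the native emitter (illustrative only):

    import micropython

    @micropython.native
    def first_even(items):
        try:
            for x in items:
                if x % 2 == 0:
                    return x  # unwinding jump out of the try block
        finally:
            print("cleanup")  # still runs before the function returns
        return None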
Back in 8047340d75 basic support was added in
the VM to handle return statements within a finally block. But it didn't
cover all cases, in particular when some finally's were active and others
inactive when the "return" was executed.
This patch adds further support for return-within-finally by correctly
managing the currently_in_except_block flag, and should fix all cases. The
main point is that finally handlers remain on the exception stack even if
they are active (currently being executed), and the unwind return code
should only execute those finally's which are inactive.
New tests are added for the cases which now pass.
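One of the shapes this now handles correctly (illustrative):

    def f():
        try:
            try:
                pass
            finally:
                # this finally is active (currently executing) when the return runs
                return 1
        finally:
            # this finally is inactive at that point, and must still be executed
            print("outer finally")

    print(f())  # prints "outer finally" then 1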
PEP479 (see https://www.python.org/dev/peps/pep-0479/) prohibited raising
StopIteration from within a generator (it is turned into a RuntimeError).
This behaviour was introduced in Python 3.5 and in 3.7 was made compulsory.
Until uPy implements PEP479, this patch adds .py.exp files for the relevant
tests so they can be run under Python 3.7.
In Python 3.7 the behaviour of repr() of an exception with one argument
changed: it no longer prints a trailing comma in the argument list. See
https://bugs.python.org/issue30399
This patch modifies tests that rely on this behaviour to not rely on it.
And the python34.py test is updated to include a test for this behaviour
with a .exp file.
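For reference, the difference looks like this (CPython shown):

    repr(Exception('x'))
    # Python 3.6 and earlier: "Exception('x',)"
    # Python 3.7 and later:   "Exception('x')"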
Prior to this patch, native code would use a full nlr_buf_t for each
exception handler (try-except, try-finally, with). For nested exception
handlers this would use a lot of C stack and be rather inefficient.
This patch changes how exceptions are handled in native code by setting up
only a single nlr_buf_t context for the entire function, and then manages a
state machine (using the PC) to work out which exception handler to run
when an exception is raised by an nlr_jump. This keeps the C stack usage
at a constant level regardless of the depth of Python exception blocks.
The patch also fixes an existing bug whereby, if local variables were
written to within an exception handler, their values were incorrectly
restored when an exception was raised (since the nlr_jump would restore
register values back to the point of the nlr_push).
And it also gets nested try-finally+with working with the viper emitter.
Broadly speaking, efficiency of executing native code that doesn't use
any exception blocks is unchanged, and emitted code size is only slightly
increased for such functions.  C stack usage of all native functions is
either equal or less than before. Emitted code size for native functions
that use exception blocks is increased by roughly 10% (due in part to
fixing of above-mentioned bugs).
But, most importantly, this patch allows to implement more Python features
in native code, like unwind jumps and yielding from within nested exception
blocks.
These POSIX wrappers are assumed to be passed a concrete stream object so
it is more efficient (eg on nan-boxing builds) to pass in the pointer
rather than mp_obj_t, because then the users of these functions only need
to store a void* (and mp_obj_t may be wider than a pointer). And things
would be further improved if the stream protocol functions eventually took
a pointer as their first argument (instead of an mp_obj_t).
This patch is a step to getting ussl/axtls compiling on nan-boxing builds.
See issue #3085.
mpy-cross is a host binary, not a target binary.  It should not be built
with the target compiler, compiler options and other settings.  For
example, if someone currently tries to build the unix port from a
pristine checkout with the following command:
make CROSS_COMPILE=arm-linux-gnueabihf-
then mpy-cross will be built with arm-linux-gnueabihf-gcc and of course
won't run on the host, leading to overall build failure.
This situation was worked around for some options in 1d8c3f4cff, so add
MICROPY_FORCE_32BIT and CROSS_COMPILE to that set too.
The aim here is to have spi.c contain the low-level SPI driver which is
independent (not fully but close) of MicroPython objects and methods, and
the higher-level bindings are separated out to pyb_spi.c and machine_spi.c.
Among other things, this requires putting bootloader object files in to
their relevant .a archive, so that they can be correctly referenced by the
ESP IDF's linker script.
Otherwise there is the possibility that n_free starts out non-zero from the
previous iteration, which may have found a few (but not enough) free blocks
at the end of the heap. If this is the case, and if the very first blocks
that are scanned the second time around (starting at
gc_last_free_atb_index) are found to give enough memory (including the
blocks at the end of the heap from the previous iteration that left n_free
non-zero) then memory will be allocated starting before the location that
gc_last_free_atb_index points to, most likely leading to corruption.
This serious bug did not manifest itself in the past because a gc_collect
always resets gc_last_free_atb_index to point to the start of the GC heap,
and the first block there is almost always allocated to a long-lived
object (eg entries from sys.path, or mounted filesystem objects), which
means that n_free would be reset at the start of the search loop.
But with threading enabled with the GIL disabled it is possible to trigger
the bug via the following sequence of events:
1. Thread A runs gc_alloc, fails to find enough memory, and has a non-zero
n_free at the end of the search.
2. Thread A calls gc_collect and frees a bunch of blocks on the GC heap.
3. Just after gc_collect finishes in thread A, thread B takes gc_mutex and
does an allocation, moving gc_last_free_atb_index to point to the
interior of the heap, to a place where there is most likely a run of
available blocks.
4. Thread A regains gc_mutex and does its second search for free memory,
starting with a non-zero n_free. Since it's likely that the first block
it searches is available it will allocate memory which overlaps with the
memory before gc_last_free_atb_index.
- Allow configuration by a board of autorefresh number and burst length.
- Increase MPU region size to 8MiB.
- Make SDRAM region cacheable and executable.
"coverage" build uses different BUILD directory comparing to the normal
build. Previously, any build picked up libaxtls.a from normal build's
directory, but that was fixed recently. So, for each build, we must
build axtls explicitly.
This fixes Travis build in particular.
Use overrideable properties instead of hardcoding the use of the
default cl executable used by msvc toolsets. This allows using
arbitrary compiler commands for qstr header generation.
The CLToolExe and CLToolPath properties are used because they are,
even though absent from any official documentation, the de-facto
standard as used by the msvc toolsets themselves.
Without this patch, on 64-bit architectures the "1 << (small_int_bits - 1)"
is computed using only 32-bit values (since small_int_bits is a uint8_t)
and so will overflow (and give the wrong result) if small_int_bits is
larger than 32.
Requesting a baudrate of X should never configure the peripheral to have a
baudrate greater than X because connected hardware may not be able to
handle higher speeds. This patch makes sure to round the prescaler up so
that the actual baudrate is rounded down.
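A minimal sketch of the rounding rule (the real code is in the C driver,
and this ignores hardware constraints such as power-of-two prescalers;
the function name is hypothetical):

    def choose_prescaler(clk_hz, requested_baud):
        # round the prescaler up so the resulting baudrate is rounded down
        prescaler = (clk_hz + requested_baud - 1) // requested_baud
        actual_baud = clk_hz // prescaler  # never greater than requested_baud
        return prescaler, actual_baud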
Input files like basics/string_format.py and float/string_format.py have
the same basename so using that name for writing the output (.exp and .out
files) when both tests fail, results in the output of the first one being
overwritten.
Avoid this by using unique names for the output, replacing path characters
with underscores.
There is no need to have three copies of the exception object on the top of
the native value stack. Instead, the values on the stack should be the
first two items in an nlr_buf_t: the prev pointer and the ret_val pointer.
This is all that is needed and is what the rest of the native emitter
expects is on the stack.
This patch is essentially an optimisation. Behaviour is unchanged,
although the stack layout for native exception handling now makes more
sense.
A native function allocates space on its C stack for mp_code_state_t,
followed by its Python stack, then its locals. This patch makes sure that
the native function actually starts at the start of its Python stack,
rather than at the start of mp_code_state_t (which didn't lead to any
issues so far because the mp_code_state_t is unused after the native
function sets itself up).
On x86 archs (both 32 and 64 bit) a bool return value only sets the 8-bit
al register, and the higher bits of the ax register have an undefined
value. When testing the return value of such cases it is required to just
test al for zero/non-zero. On the other hand, checking for truth or
zero/non-zero on an integer return value requires checking all bits of the
register. These two cases must be distinguished and handled correctly in
generated native code. This patch makes sure of this.
For other supported native archs (ARM, Thumb2, Xtensa) there is no such
distinction and this patch does not change anything for them.
The Python documentation recommends to pass the command as a string when
using Popen(..., shell=True). This is because "sh -c <string>" is used to
execute the command and additional arguments after the command string are
passed to the shell itself (not the executing command).
https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen
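I.e. (illustrative, with an arbitrary command):

    import subprocess

    # the whole command is passed as a single string when shell=True, so that
    # arguments are not swallowed by the shell itself
    subprocess.Popen("echo hello", shell=True)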
Prior to this patch, if VBAT was read via ADC.read() or
ADCAll.read_channel(), then it would remain enabled and subsequent reads
of TEMPSENSOR or VREFINT would not work. This patch makes sure that VBAT
is disabled for all cases that it could be read.
DEBUG_printf and MICROPY_DEBUG_PRINTER is now used instead of normal
printf, and a fault is fixed in mp_obj_class_lookup with debugging enabled;
see issue #3999. Debugging can now be enabled on all ports including when
nan-boxing is used.
This patch in effect renames MICROPY_DEBUG_PRINTER_DEST to
MICROPY_DEBUG_PRINTER, moving its default definition from
lib/utils/printf.c to py/mpconfig.h to make it official and documented, and
makes this macro a pointer rather than the actual mp_print_t struct. This
is done to get consistency with MICROPY_ERROR_PRINTER, and provide this
macro for use outside just lib/utils/printf.c.
Ports are updated to use the new macro name.
The NRF52 define only covers nrf52832, so update the define checks
to use NRF52_SERIES to cover both nrf52832 and nrf52840.
Fixed machine_hard_pwm_instances table in modules/machine/pwm.c
This enables PWM(0) to PWM(3), RTCounter(2), Timer(3) and Timer(4),
in addition to NFC reset cause, on nrf52840.
The first dynamic qstr pool is double the size of the 'alloc' field of
the last const qstr pool. The built in const qstr pool
(mp_qstr_const_pool) has a hardcoded alloc size of 10, meaning that the
first dynamic pool is allocated space for 20 entries. The alloc size
must be less than or equal to the actual number of qstrs in the pool
(the 'len' field) to ensure that the first dynamically created qstr
triggers the creation of a new pool.
When modules are frozen a second const pool is created (generally
mp_qstr_frozen_const_pool) and linked to the built in pool. However,
this second const pool had its 'alloc' field set to the number of qstrs
in the pool. When freezing a large quantity of modules this can result
in thousands of qstrs being in the pool. This means that the first
dynamically created qstr results in a massive allocation. This commit
sets the alloc size of the frozen qstr pool to 10 or less (if the number
of qstrs in the pool is less than 10). The result of this is that the
allocation behaviour when a dynamic qstr is created is identical with
and without frozen code.
Note that there is the potential for a slight memory inefficiency if the
frozen modules have fewer than 10 qstrs, as the first few dynamic
allocations will have quite a large overhead, but the geometric growth
soon deals with this.
When waking from stop mode most of the system is still in the same state as
before entering stop, so only minimal configuration is needed to bring the
system clock back online.
The WiPy machine.Timer class is very different to the esp8266 and esp32
implementations which are better candidates for a general Timer class. By
moving the WiPy Timer docs to a completely separate file, under a new name
machine.TimerWiPy, it gives a clean slate to define and write the docs for
a better, general machine.Timer class. This is with the aim of eventually
providing documentation that does not have conditional parts to it,
conditional on the port.
While the new docs are being defined it makes sense to keep the WiPy docs,
since they describe its behaviour. Once the new Timer behaviour is defined
the WiPy code can be changed to match it, and then the TimerWiPy docs would
be removed.
This patch makes the Thumb-2 native emitter use wide ldr instructions to
call into the runtime, when the index into the native glue function table
is 32 or greater. This reduces the generated assembler code from 10 bytes
to 6 bytes, saving RAM and making native code run about 0.8% faster.
A recent version of arm-none-eabi-gcc (8.2.0) will warn about unused packed
attributes in USB_WritePacket and USB_ReadPacket. This patch suppresses
such warnings for this file only.
This error message did not consume all of its variable args, a bug
introduced long ago in baf6f14deb. By fixing
it to use %s (instead of keeping the string as-is and deleting the last
arg) the same error message string is now reused three times in this format
function and gives a code size reduction of around 130 bytes. It also now
gives a better error message when a non-string is passed in as an argument
to format, eg '{:d}'.format([]).
The machine module should be standard across all ports so should have the
same set of classes in the docs. A special warning is added to the top of
the machine.SD class because it is not standardised and only available on
the cc3200 port.
It's fair to just provide a link to all available modules, regardless of
the port. Most of the existing ports (unix, stm32, esp8266, esp32) share
most of the same set of modules anyway, so no need to maintain separate
lists for them. And there's a big discussion at the start of this index
about modules not being available on a given port.
For port-specific modules, they can also be listed unconditionally because
they have headings that explicitly state they are only available on certain
ports.
Works with pins declared normally in mpconfigboard.h, eg. (pin_XX), as well
as (pyb_pin_XX).
Provides new mp_hal_pin_config_alt_static(pin_obj, mode, pull, fn_type)
function declared in pin_static_af.h to allow configuring pin alternate
functions by name at compile time.
This mechanism will scale to an arbitrary number of pollable objects, so
long as they implement the MP_STREAM_GET_FILENO ioctl. Since ussl objects
pass through ioctl requests transparently to the underlying socket object,
it will allow ussl sockets to be polled. And a user object with uio.IOBase
as a base could support polling.
The underlying socket can handle polling, and any other transparent ioctl
requests. Note that CPython handles the case of polling an ssl object by
polling the file descriptor of the underlying socket file, and that
behaviour is emulated here.
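A hypothetical sketch of what this enables, assuming the unix port's
usocket/ussl/uselect module names:

    import usocket, ussl, uselect

    addr = usocket.getaddrinfo('example.com', 443)[0][-1]
    s = usocket.socket()
    s.connect(addr)
    ss = ussl.wrap_socket(s)

    poller = uselect.poll()
    # works because the ussl object passes the MP_STREAM_GET_FILENO ioctl
    # through to the underlying socket
    poller.register(ss, uselect.POLLIN)
    print(poller.poll(1000))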
Otherwise they may be called on a socket that no longer exists.
For example, if the GC calls the finaliser on the socket and then reuses
its heap memory, the "callback" entry of the old socket may contain invalid
data. If lwIP then calls the TCP callback the code may try to call the
user callback object which is now invalid. The lwIP callbacks must be
deregistered during the closing of the socket, before all the pcb pointers
are set to NULL.
The code was dereferencing 0x800 and loading a value from there, trying to
use a literal value (not address) defined in the linker script
(_ram_fs_cache_block_size) which was 0x800.
This change brings the following benefits:
- all existing tests and test behaviour are retained
- can now use Travis parallel build mechanism
- total time for tests is about 5 mins 30 secs, down from around 10 mins
- two additional test suites are now run: standard (non coverage) unix
build and nanbox unix build
- much easier to see what is failing: if you click through to the Travis CI
details each parallel build job is displayed with pass/fail
- scales much better when adding new test targets
Adding MICROPY_FATFS as a makefile flag in order to explicitly include
the oofatfs files in the build.
The flag is set to 0 by default and must be set in addition to
MICROPY_VFS and MICROPY_VFS_FAT in mpconfigport.h.
Temporarily solving the issue of
"differ from the size of original declaration [-Werror=lto-type-mismatch]"
until the linker is fixed in an upcoming release of gcc.
The bug has been reported by others, and will be fixed in the next version
of arm-gcc. However, this patch makes it possible to use modmusic and
modimage with current compilers.
Alternatively, the code can be compiled with LTO=0, but that uses a
valuable 9K more on this already squeezed target (microbit).
Support added for s132/s140 v6 in linker scripts and boards.
Support removed for s132 v2/3/5.
Download script updated to fetch new stacks and removed the
non-supported ones.
ble_drv.c updated to only handle s110 v8, and s132/s140 v6.
ubluepy updated to continue scanning after each individual scan
report reported to the module to keep old behaviour of the
Scanner class.
This patch generalizes the feather52 target to be a board without
a built-in Bluetooth stack or bootloader, giving all flash memory to
MicroPython code.
This way the feather52 target can run any Bluetooth LE stack that the
port supports for other nrf52832 targets. Hence, this makes
Makefiles/linker scripts and BLE driver support easier to maintain
in the future.
The current adoption on top of nrfx only reads the GPIO->IN register.
In order to read back an output state, nrf_gpio_pin_out_read has
to be called.
This patch combines the two read functions such that if either the
IN or OUT register has the value 1 it will return 1, else 0.
Updating lib/nrfx submodule to latest version of master to get
the new GPIO API to read pin direction.
(nrfx: d37b16f2b894b0928395f6f56ca741287a31a244)
Cleaning up use of "pyb" module.
Moving the file to a new folder and updating the
makefile accordingly. New module created called
"board" to take over the functionality of the legacy
"pyb" module.
Updating outdated documentation referring to pyb.Pin,
to now point to machine.Pin.
As the EasyDMA variant of SPI (SPIM) might clock out an additional byte
in single-byte transactions, this patch moves the nrf52832 to
use SPI and not SPIM to get more stable data transactions.
Ref: nrf52832 rev2 errata v1.1, suggested workaround is:
"Use the SPI module (deprecated but still available) or
use the following workaround with SPIM ..."
The current nrfx SPIM driver does not contain this workaround,
and in the meantime moving back to SPI fixes the issue.
Also, tabbing the nrfx_config.h a bit to make it more readable.
This patch moves the check of the SPI configuration before
including any SPI header files. As targets might disable SPI
support, the current code ends up including SPIM if SPI is not
configured. Hence, the check of whether the module is enabled
should be done before including the headers.
The dfu-gen .PHONY target is run unconditionally as the first build
target when included, and might fail if the hex file is not
yet generated.
To prevent this, the dfu-gen and dfu-flash targets are moved
to the main Makefile and only exposed if feather52 is the
defined BOARD.
Update configuration define from
MICROPY_HW_HAS_BUILTIN_FLASH to MICROPY_MBFS.
MICROPY_MBFS will enable the builtin flash as
part of enabling the micro:bit FS.
Clang understands only -fshort-enums, not --short-enums. As
--short-enums isn't even mentioned in the gcc man page, I think this
alias exists more for backwards compatibility.
Removing unused nrf52832_512k_64k_s132_5.0.0.ld.
Adding new linker script s132_5.0.0 following new
linker script scheme.
Updating ble_drv.c to handle the decrement of
outstanding tx packets on hvx for s132 v5.
After nrfx 1.0.0 a new macro was introduced to do a common
hardware timeout. The macro function triggers a counter of
retries or a timeout in us. However, in many cases, like in
nrfx_adc.c the timeout value is set to 0, leading to an infinite
loop in mp_hal_delay_us. This patch prevents this from happening.
Path of error:
nrfx_adc.c -> NRFX_WAIT_FOR -> NRFX_DELAY_US -> mp_hal_delay_us.
This patch also allows all arguments to be passed as positional
arguments, such that an external user of the make_new function can
provide all parameters positionally.
machine/i2c already uses mp_hal_get_pin_obj which
points to pin_find function in order to locate correct
pin object to use.
The pin_find function was recently updated to also
being able to locate pins based on an integer value,
such that pin number can be used as argument to object
constructors.
This patch modifies and unifies pin object lookup for
SPI, music and pwm.
Renaming config for enabling random module with hw
random number generator from MICROPY_PY_HW_RNG to
MICROPY_PY_RANDOM_HW_RNG to indicate which module it
is configuring.
Also, disabling the config by default in mpconfigport.h.
Adding the enable of RNG in all board configs.
Moving the ifdef in modrandom, which tests for the config being
set, earlier in the code. This is to prevent unnecessary
includes if not needed.
Increase the maximum number of queued notifications from 1 to 6. This
massively speeds up the NUS console - especially when printing large
amounts of text. The reason is that multiple transfers can be done in a
single connection event, in ideal cases 6 at a time.
This patch moves all nrf52 targets to use SPIM backend
for SPI which features EasyDMA. The main benefit of doing
this is to utilize SPIM3 on nrf52840, which is an
EasyDMA-only peripheral.
This patch adds an irq method to the pin object. Handlers
registered via the irq method will be kept as part of the
ROOT_POINTERS.
In order to resolve which pin object is the root of the
IRQ, pin_find has been extended to also be able to
look up Pin objects based on an mp_int_t pin number.
This also implies that the Pin.new API now also supports
creation of Pin objects based on the integer value of the
pin instead of the old style mandating a string name for the Pin.
All boards have been updated to use real pin number from
0-48 instead of pin_Pxx for UART/SPI and music module pins.
UART/SPI/modmusic have also been updated to use the pin number
provided directly or look up the Pin object based on the
integer value of the pin (modmusic).
Pin generation has been updated to create a list of pins, where
the board/cpu dicts are now referring to an index in this list
instead of having one const declaration for each pin. This new
const table makes it possible to iterate through all pins generated
in order to locate the correct Pin object.
In order to be able to support GPIO1 port on nrf52840
the port has been removed from the Pin object.
All pins on port1 will now be incrementally on top of
the pin numbers for gpio0. Hence, Pin 1.00 will become
P32, and Pin 1.15 will become P47.
The modification is done to address the new gpio HAL
interface in nrfx, which resolves the port to be
configured based on a multiple of 32.
The patch also affects the existing devices which do
not have a second GPIO port, in that the
port indications A and B are removed from Pin generation.
This means that the pin which was earlier addressed
as PA0 is now P0, and PA31 is P31.
Also, this patch removes the gpio member which earlier
pointed to the peripheral GPIO base address. This is not
needed anymore, hence removed.
With all the variation in chips and boards it's tedious to copy and
redefine linker scripts for every option. Making linker scripts more
modular also opens up more possibilities, like enabling/disabling the
flash file system from the Makefile - or even defining its size from a
Makefile argument (FS_SIZE=12 for a 12kB filesystem if tight on space).
Summarized, this squashed PR replaces the hal/ folder in the port with the official
HAL layer from Nordic Semiconductor: https://github.com/NordicSemiconductor/nrfx.
A Git submodule has been added under lib/nrfx for the nrfx dependency.
The drivers / modules have been updated to use this new HAL layer; nrfx at v1.0.0.
Also, header files and system files for nrf51/nrf52x chip variants have been deleted from the device/ folder, keeping only the startup files written in C. All other files are now fetched from nrfx.
3 new header files have been added in the ports/nrf/ folder to configure nrfx (nrfx_config.h), logging (nrfx_log.h) and to glue nrfx together with the drivers and modules from MicroPython (nrfx_glue.h).
The PR has been a joint effort from @aykevl (Ayke van Laethem) and @glennrub.
For reference, the commit log will be kept to get an overview of the changes done:
* ports/nrf: Initial commit for moving hal to Nordic Semiconductor BSD-3 licensed nrfx-hal.
* ports/nrf: Adding nrfx, Nordic Semiconductor BSD-3 hal layer, as git submodule checked out at lib/nrfx.
* ports/nrf/modules/machine/uart: Fixing bug which set hwfc to parity excluded, always resulting in no flow control, hence corrupted output. Also adding an extra loop on uart_tx_char to prevent any tx when any ongoing tx is in progress.
* ports/nrf/i2c: Moving I2C over to nrfx driver.
* ports/nrf/modules/machine/i2c: Alignment. Renaming print function param 'o' to 'self_in'
* ports/nrf/spi: Updating SPI machine module to use nrfx drivers.
* ports/nrf: Renaming modules/machine/rtc.c/.h to rtcounter.c/.h to not confuse the peripheral with Real-Time Clock:
* ports/nrf: Updating various files after renaming machine module RTC to RTCounter.
* ports/nrf: Renaming RTC to RTCounter in modmachine globals dict table. Also updating object type name to reflect new module name.
* ports/nrf: Fixing leftovers after renaming rtc to rtcounter.
* ports/nrf: Early untested adoption of nrfx_rtc in RTCounter. Untested.
* nrf/modules/machine/i2c: Improve keyword argument handling
* ports/nrf/modules/temp: Updating Temp machine module to use nrfx defined hal nrf_temp.h. Moving logic of BLE stack awareness to machine module.
* ports/nrf/boards/pca10040: Enable machine Temp module.
* nrf/modules/machine/rtcounter: Remove magic constants.
* ports/nrf: Adding base support for nrfx module logging. Adding option to disable logging of UART as it might log its own setup over UART while the peripheral is not yet set up. Logging of UART could make sense if other transport of log is used.
* ports/nrf: updating nrfx_log.h with more correct parentheses on macro grouping.
* ports/nrf: Updating nrfx logging with configuration to disable logging of UART module. The pattern can be used to turn off other modules as well. However, for now UART is the only module locking itself by logging before the peripheral is configured. Logging is turned off by default, can be enabled in nrfx_config.h by setting NRFX_LOG_ENABLED=1.
* ports/nrf/modules/random: Updating modrandom to use nrfx hal for rng. Not using nrfx-driver for this peripheral as its blocking mode would do the trick on RNG. Moving softdevice aware code from legacy hal to modrandom.c.
* nrf: Enable Peripheral Resource Sharing.
This enables TWI and SPI to be enabled at the same time.
* nrf/Makefile: Define MCU sub variant (e.g. NRF51822/NRF51422)
* nrf: Port TIMER peripheral to nrfx HAL.
* nrf/modules/machine/uart: Optimize UART module
For a nRF51, this results in a size reduction of:
.text: -68 bytes
.data: -56 bytes
* nrf/modules/machine/uart: Don't use magic index numbers.
* nrf/modules/machine/uart: Fix off-by-one error.
For nrf51:
.text: -40 bytes
* nrf/modules/machine/rtcounter: Update for nrfx HAL.
* nrf/modules/machine/i2c: Reduce RAM consumption.
Reductions for the nrf51:
flash: -108 bytes
RAM: -72 bytes
* nrf/mpconfigport: Avoid unnecessary root pointers.
This saves 92 bytes of RAM.
* nrf: Support SoftDevice with nrfx HAL.
* nrf: Add NVMC peripheral (microbitfs) support.
There is no support yet for a SoftDevice.
It also fixes a potentially serious bug in start_index generation.
* nrf/modules/machine/spi: Optimize SPI peripheral.
nrf51:
text: -340 bytes
data: -72 bytes
nrf52:
text: -352 bytes
data: -108 bytes
* nrf/modules/random: Forgot to commit header file.
* nrf: Make nrfx_config.h universal for all boards.
* nrf: Use SoftDevice API for flash access when built for SD
* nrf/drivers/bluetooth: Remove legacy HAL driver includes.
These were not used anymore so can be removed.
* ports/nrf/microbit: Port microbit targets to nrfx HAL
Initial port of microbit modules to use nrfx HAL layer.
Tested display/image and modmusic on micro:bit to verify that
softpwm and ticker for nrf51 is working as expected.
Changing IRQ priority on timer to priority 2, as 1 might collide if
used side by side with the SD110 BLE stack.
The patch reserves the Timer1 peripheral at compile time. This is not ideal
and should be resolved in a separate task.
* nrf/boards/microbit: Remove custom nrfx_config.h from microbit target, adding disablement of timer1 if softpwm is enabled.
* nrf/adc: Update ADC module to use nrfx
* nrf/modules/machine/pwm: Updating machine PWM module to use nrfx HAL driver.
examples/nrf52_pwm.py and examples/nrf52_servo.py tested on pca10040.
* nrf: Removing hal folder and boards nrf5x_hal_conf.h headers.
* nrf/nrfx_glue: Adding direct NVIC access for S110 BLE stack
If SoftDevice s110 has not yet been initialized, the IRQ will not be forwarded to
the application using the sd_nvic* function calls. Hence, direct access to the CMSIS
NVIC functions is used instead if the SoftDevice is not enabled.
* nrf/drivers/ticker: Setting IRQ priority 3 on Timer1
SoftDevice fails to initialize if Timer1 has been configured to priority
level 2 before enabling the SD. The timer is set to priority 1, higher than BLE
stack in order to provide better quality of music rendering when used with the
music module. This might be too high, time will show.
* nrf/examples: Updating ubluepy_temp after moving RTCounter to nrfx.
* nrf: delete duplicate files from device folder which can be located in nrfx/mdk.
* nrf/Makefile: Fetch system files from nrfx.
Testing on each device sub-variant to figure out which system file to
use. Reason for this is that nrf52.c is actually defining nrf52832.
Removing NRF_DEFINES parameter setting the device in use into the
same sub-variant test, as NRF52 is unique to nrf52832 when using nrfx.
Without this exclusion of -DNRF52 in compilation for nrf52840, the
device will be interpreted as a nrf52, hence nrf52832.
Also, changing name on variable SRC_NRF_HAL to SRC_NRFX_HAL to
explicitly tell the origin of the file.
* nrf: Updating device #ifdefs to be more open to non-nrf51 targets.
* nrf/modules/machine/uart: Removing second instance of UART for nrf52840 as it only has one non-DMA variant.
* nrf/device: Removing system files as these are now used from nrfx/mdk
* nrf: Moving startup files in device one level up as there is no need for deep hierarchy.
* nrf: Use NRF52_SERIES defined in nrfx/mdk/nrf.h as define value when testing for both nrf52(832) and nrf52840 variants.
* nrf/modules/machine/uart: Enable UART RX by default
Enable rx by default after initialization of the peripheral.
Else, the nrfx driver will re-enable rx for each byte read
on uart REPL, clearing the EVENT_RXDRDY before second byte,
which again will make second byte get lost and read will get stuck.
This happens if bytes are transmitted to the nrf(51) while it is still
processing the previous byte. Not seen on nrf52, but it should
also become an issue at higher speeds.
This patch sets rx to always be enabled, hence not clearing the event
between read bytes, and it will be able to detect the next byte received
upon finishing the first.
* nrf/modules/machine/timer: Fixing defines excluding Timer1 if ticker/softpwm is used.
* nrf: Switching the import from mpconfigboard.h to mpconfigport.h in nrfx_config.h, as mpconfigboard.h might define default values for defines not set by a board-specific header.
* nrf/modules/machine/i2c: nrfx integration fixes
Increasing speed to 400K.
Returning Address NACK's as MP error code; MP_ENODEV.
Returning MP_ETIMEOUT on all other error codes from TWI nrfx driver
except the ANACK.
Enabling and disabling the TWI peripheral before and after each transaction.
* nrf/examples: Updating ssd1306_mod.py to split framebuffer transfer into multiple chunks
* nrf/modules/machine/i2c: Return MP_EIO error if Data NACK occurs.
* nrf: Addressing review comments.
* nrf: Updating git submodule and users to nrfx v1.0.0.
* nrf/modules/machine/adc: Update adc module to follow v1.0.0 nrfx API.
* nrf/modules/machine/spi: Implement init and deinit functions
Extending SPI objects with a config member such that
configuration can be kept between new() and init().
Moving initialization done in new() to common init
function shared between the module functions.
If SPI is already configured, the SPI peripheral will
be uninitialized before being initialized again.
Adding logic to handle initialization of polarity and
phase. As well, updating default speed to 1M from 500K.
* nrf/modules/machine: Removing unused nrfx includes in machine module header files
When compiling for microbit with LTO=0, a compiler error occurs because
the 'ms' variable in the microbit_sleep function has not been initialized.
This patch initializes the variable to 0.
The nrf51x22_256k_16k_s110_8.0.0.ld had a stack size of only 1kB, which
is way too low. Additionally, the indicated _minimum_stack_size (set at
2kB for that chip) isn't respected.
This commit sets the heap end based on the stack size (heap end = RAM
end - stack size) making it much easier to configure.
Additionally, the stack/heap size of nrf52 chips has been set to a more
sane value of 8kB.
Leave it enabled by default on all targets.
This is only possible when using UART-over-BLE (NUS) instead of the
default hardware peripheral. The flash area saved is quite substantial
(about 2.2KB) so this is useful for custom builds that do not need UART.
The feather52 target, which is using SD s132 v.2.0.1, cannot compile
because a variable containing the RAM start address is not used.
This patch enables the correct sd_ble_enable variant for this SD.
Similar commit to this one:
6e56e6269f
When .text + .data overflow available flash, the linker may not show an
error. This change makes sure .data is included in the size calculation.
This frees 128 bytes of .bss RAM on the nRF51, at the cost of possibly
more expensive GC cycles. Leave it as-is on the nRF52 as that chip has a
lot more RAM.
This is also done in the micro:bit:
a7544718a7/inc/microbit/mpconfigport.h (L6)
* ports/nrf/boards: Adding linker script for nrf52832 using BLE stack s132 v.5.0.0.
* ports/nrf/drivers/bluetooth: Updating makefile to add BLE_API_VERSION=4 if s132 v5.0.0 is used.
* ports/nrf/drivers/bluetooth: Updating BLE stack download script to also download S132 v5.0.0.
* ports/nrf/drivers/bluetooth: Updating ble_drv.c to handle BLE_API_VERSION=4 (s132 v5.0.0).
* ports/nrf/boards: Updating linker script for nrf52832 with s132 v.5.0.0 bluetooth stack.
* ports/nrf/drivers/bluetooth: Removing commented out code in ble_drv.c
* ports/nrf/drivers/bluetooth: Updating define of GATT_MTU_SIZE_DEFAULT for SD132v5 to be defined using the new name defined in the SD headers in a more generic way.
* ports/nrf/drivers/bluetooth: Cleaning up use of BLE_API_VERSION in the ble_drv.c. Also considering s140v6 API, so not all has been changed to >= if API version 3 and 4 in combo is used. New s140v6 will differ on these, and add a new API not compatible with the API for 3 and 4.
- Rename microbit_module_init to board_module_init0 which is the generic
board module init function.
- Add low priority callback registration of display tick handler in the
module init function.
- Rename init function to ticker_init0.
- Implement ticker_register_low_pri_callback (recycle of unused
set_low_priority_callback function which was unimplemented).
- Add support for registering 2 low pri callbacks. For now, one intended
for microbit display, and one for modmusic.
* ports/nrf: Add micro:bit filesystem.
This filesystem has been copied from BBC micro:bit sources [1] and
modified to work with the nRF5x port.
[1]: https://github.com/bbcmicrobit/micropython/blob/master/source/microbit/filesystem.c
* ports/nrf/modules/uos: Make listdir() and ilistdir() consistent.
This removes the optional directory parameter from ilistdir(). This is
not consistent with VFS, but makes more sense when using only the
microbit filesystem.
Saves about 100 bytes.
* ports/nrf/modules/uos: Add code size comment.
* Remove FLASH_ISR and merge .isr_vector into FLASH_TEXT. This saves
some code space, especially on nRF52 devices.
* Reserve space for nonvolatile storage of data. This is the place for
a filesystem (to be added).
The patch enables the possibility to disable or initialize the repl
info from outside of the module. Can also be used to initialize the
repl_display_debugging_info in pyexec.c if no startup file is clearing
.bss segment.
This commit is a combination of about 802 commits from the initial stages
of development of this port, up to and including the point where the code
was moved to the ports/nrf directory. The following is a digest of the
original commits in their original order (most recent listed first),
grouped where possible by author. The list is here to give credit for the
work and provide some level of traceability and accountability. For the
full history of development please consult the following repository:
https://github.com/tralamazza/micropython
Unless otherwise explicitly stated in a sub-directory or file, all code is
MIT licensed and the relevant copyright holders are listed in the
comment-header of each file.
Glenn Ruben Bakke <glennbakke@gmail.com>
ports/nrf: Moving nrf51/52 port to new ports directory
nrf: Aligning with upstream the use of nlr_raise(mp_obj_new_exception_msg(&mp_type_ValueError, ...)
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf/modules/random: Backport of microbit random number generator module
Backport of micro:bit random module.
Plugged into the port as a general random module for all nrf51/nrf52 targets. Works both with and without Bluetooth LE stack enabled.
Behavioral change: seed() method has been removed, as the use of RNG peripheral generates true random sequences and not pseudo-random sequences.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf/hal/rng: Adding HAL driver for accessing RNG peripheral
The driver also takes care of calling the Bluetooth LE stack for random values if the stack is enabled. The reason for this is that the Bluetooth LE stack takes ownership of the NRF_RNG when enabled. The driver tolerates enabling/disabling on the fly, and will choose direct access to the peripheral if the Bluetooth LE stack is disabled or not compiled in at all.
Driver has been included in the top Makefile, and will not be compiled in unless nrf51_hal_conf.h/nrf52_hal_conf.h defines HAL_RNG_MODULE_ENABLED (1).
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf/boards: Adding Arduino Primo board support (#88)
* nrf: Adding Arduino Primo board support
* nrf: Adding arduino_primo to target boards table in readme.md
* nrf/boards: Activating pyb.LED module for arduino_primo board.
* nrf/boards: Removing define not needed for arduino_primo
Updating arduino_primo board mpconfigboard.h. Removing a define
that was wrongly named. Instead of renaming it, it was removed as
it was never used.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf: Add support for floating point on nrf52 targets.
Duplicating pattern for detecting location of libm, libc and libgcc
from teensy port. Activating MICROPY_FLOAT_IMPL (FLOAT) for nrf52 targets
and adding libs into the compile. For nrf51 targets it is still set to
NONE as code grows to much (about 30k).
Some numbers on flash use if MICROPY_FLOAT_IMPL is set to
MICROPY_FLOAT_IMPL_FLOAT and math libraries are enabled (lgcc, lc, lm).
nrf51:
======
without float support:
text data bss dec hex filename
144088 260 30020 174368 2a920 build-pca10028/firmware.elf
with float support:
text data bss dec hex filename
176228 1336 30020 207584 32ae0 build-pca10028/firmware.elf
nrf52:
======
without float support:
text data bss dec hex filename
142040 356 36236 178632 2b9c8 build-pca10040/firmware.elf
with float support:
text data bss dec hex filename
165068 1436 36236 202740 317f4 build-pca10040/firmware.elf
Daniel Tralamazza <daniel@tralamazza.com>
nrf: add a note for running the nrfjprog tool on Linux, and touch up the make sd comment
nrf: clean compiler warnings
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf/drivers/bluetooth: Speedup Bluetooth LE REPL.
Updating mp_hal_stdout_tx_strn_cooked to pass on the whole string
to mp_hal_stdout_tx_strn instead of passing byte by byte.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf: Use the name MicroPython consistently in comments
nrf5: Updating readme with BLE REPL
Ben Whitten <ben.whitten@lairdtech.com>
nrf/boards: Add DVK BL652 from Laird
To build run 'make BOARD=dvk_bl652 SD=s132'
To flash with jlink run 'make sd BOARD=dvk_bl652 SD=s132'
This will remove the existing licences in the bl652
Ben Whitten <ben.whitten@lairdtech.com>
nrf/drivers/bluetooth: Allow s132 to use LFCLK
nrf: Add nordic sd folders to the .gitignore
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf/boards: Updating microbit pin mapping for SPI and I2C.
nrf/boards: Correcting feather52 I2C SDA pin assigned to the board.
nrf/examples: Update ssd1306 modification example to import correct class.
nrf/boards: Activate RTC and Timer module and HAL on pca10056. Also swapping out UART with UART DMA variant on this target board.
nrf/boards: Activate RTC, Timer, I2C, ADC and HW_SPI module and HAL on pca10031.
nrf/boards: Activate RTC, Timer, I2C and ADC module and HAL on pca10001.
nrf/boards: Adding RTC and Timer module and HAL to pca10000.
nrf: Updating README.
nrf: Removing unused font header.
Daniel Tralamazza <daniel@tralamazza.com>
rename temperature example
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/examples: Adding ubluepy peripheral example that works across nrf51 and nrf52. The example uses the Environmental Sensing Service to provide the temperature characteristic. The temperature is fetched from the machine.Temp module. One note is that the example uses 1 LED which is not present on all boards.
nrf5/modules/ubluepy: Adding new event constant for gatts write (80) events from bluetooth stacks.
nrf5/hal/timer: Add support for fetching temperature if bluetooth stack is enabled.
nrf5/drivers/bluetooth: Make printf in 'ble_drv_service_add' function part of debug log.
Daniel Tralamazza <daniel@tralamazza.com>
implement #50
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/examples: Updating mountsd example with comment from deleted sdcard.py on how to wire SD directly to SPI.
nrf5/examples: Removing copy of sdcard.py also found in drivers/sdcard.
nrf5/examples: Removing copy of ssd1306 driver, creating a new class that overrides the needed function for i2c. Also adding some example usage in the comment in top of the file for both SPI and I2C variant.
nrf5/hal/gpio: Updating toggle inline function to work correctly, currently only used by LED module.
nrf5/examples: Renaming servo.py to nrf52_servo.py as machine.PWM is only implemented for nrf52.
nrf5/freeze: Adding generic example to freeze. Hello world with board name as parameter.
nrf5/examples: Moving nrf52 specific HW example from freeze to examples to replace test.py with a more generic example.
nrf5: Update pyb module, and led module to only be compiled in if MICROPY_HW_HAS_LED is set to 1.
nrf5/boards: Updating boards with correct LED count. Also adding new flag, MICROPY_HW_HAS_LED, to select whether the board has LED's at all. If not, this will unselect LED module from being compiled in.
nrf5/boards: Updating pca10040 board header to set the LED count.
nrf5: Generalize script setting LED(1) on to be applied only when there are leds present on the board.
nrf5: Updating mpconfigport.h to set default values for MICROPY_HW_LED_COUNT (0) and MICROPY_HW_LED_PULLUP (0).
nrf5/boards/feather52: Update s132 target makefile with dfu-gen and dfu-flash. This enables feather52 with Bluetooth LE. Features to be configured in bluetooth_conf.h.
nrf5/boards/feather52: Add SERIAL makeflag if dfu-flash target is used.
nrf5: Updating readme.md file based on review comments.
nrf5: Update help.c with documentation of CTRL-A and CTRL-B to enter and exit raw REPL mode.
nrf5: Updating main.c to support RAW REPL.
Update README.md
nrf5/modules/music: Updating pitch method to also use configured pin from mpconfigboard.h if set, in the case of lacking kwarg for pin. Also removing some commented out arguments to remove some confusion in the argument list. Done for both play() and pitch().
nrf5/modules/music: Correct parameter checking of pin argument to decide whether to use the MUSIC_PIN define or throw an error. If the MUSIC_PIN define is configured the pin argument to music module play() can be elided.
nrf5/modules/machine: Update timer init to set default IRQ priority before initializing Timer instance.
nrf5/hal/timer: Update timer hal to use value provided in init to configure the irq_priority.
nrf5/modules/machine: Reserving timer0 instance for bluetooth if compiled in. Leaving timer1 and timer2 for application. Note that music module soft-pwm will also occupy timer1 if enabled.
nrf5/modules/machine: Updating timer module to use new hal. Adding new parameters to the init to set period, mode and callback.
nrf5/hal/timer: Implementing hal_timer to 1us prescaler. Multiplier inside to get to millisecond resolution. Callback must be registered before starting a timer.
nrf5: Makefile cleanup. Removing duplicate include and unused netutils.c used by BLE 6lowpan network which has been removed for now.
nrf5/modules/machine: Indention fix in uart module.
nrf5/modules/machine: Removing unused code from uart module.
nrf5/hal/rtc: Updating hal driver to calculate prescaler a bit more verbose. Using 1 second interval ticks.
nrf5/modules/machine: Fixing type in RTC.
nrf5/modules/machine: Update rtc init to set default IRQ priority before initializing RTC instance.
nrf5/hal/rtc: Aligning RTC (real-time counter) HAL driver with Timer HAL driver, to make the APIs symmetric. Also updating modules/rtc to get aligned with the new HAL API.
nrf5/drivers/bluetooth: Moving stop condition initialization before call to bluetooth stack write function is done, to make sure that its not overwritten after reception of the write event in case of with_response writes.
nrf5/drivers/bluetooth: Removing duplicate static variable declaration.
nrf5/modules/ubluepy: Updating characteristic write method to take in an additional keyword, 'with_response'. Default value is False. Only activated in central role.
nrf5/drivers/bluetooth: Updating ble_drv_attr_c_write with possibility to do client write with response. Blocking call.
nrf5/examples: Adding some notes on which pin layout that has been used in the seeed_tft.py ILI9341 driver for driving the display.
nrf5/examples: Shorten name on seeedstudio_tft_shield_v2.py to seeed_tft.py.
nrf5/examples: Updating ili9341 example to use the new Framebuffer object instead of the legacy Framebuffer1.
nrf5/examples: Removing seeed.py which used a lcd mono framebuffer has been removed.
Matt Trentini <matt.trentini@gmail.com>
Adding a README for the nRF5 port
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/examples: Updating documentation in SDCard module example. Correcting typo and adding SD card wiring documentation for direct SPI connection.
nrf5/modules/pin: Adding on() and off() methods to Pin object to be forward compatible with upstream master. Legacy high() and low() methods are kept.
nrf5/modules/spi: Remove pyb abstraction from SPI module, as there was a bug in transfer of bytes due to casting errors. The update removes the pyb_spi_obj_t wrapper going directly on the machine_hard_spi_obj_t as base for machine SPI objects. SDCard mounting is also tested.
nrf5/drivers/bluetooth: Enable ubluepy central by default if running nrf52/s132 bluetooth stack. Maturity of the module is pretty OK now.
nrf5/boards/feather52: Updating pins.csv for the feather52 board.
nrf5/boards/feather52: Updating LED pull to low.
nrf5/boards/feather52: Update SPI pinout.
nrf5/main: Move initialization of modmusic to the module itself. Upon init of the module, the hardware, pwm and ticker will be started. Could be moved back to main if pwm or ticker should be shared among more modules and have to be initialized more globally.
nrf5/modules/machine/timer: If timer is used in combination with SOFT_PWM (implicitly use of ticker.c) guard the Timer1 instance from being instantiated trough python timer module. Also disable implementation of the HAL IRQ handler which is for now explicitly implemented in ticker.c for Timer1.
nrf5/modules/music: Update ticker and modmusic to share global ticks counter as a volatile variable. Use Timer1 hardware peripheral instead of instance 0. Timer0 is not free if used in combination with a bluetooth stack. Update IRQ priority to levels that are compatible in use with a bluetooth stack for both nrf51 and nrf52. Apply nrf51 PAN fixes for Timer1 instead of original Timer0.
nrf5/drivers/bluetooth: Updating bluetooth driver to initialize nrf_nvic_state_t struct during declaration of the global variable instead of explicit memset.
nrf5/hal/irq: Adding wrappers for handling nvic calls when Bluetooth LE stack is enabled.
nrf5/modules/machine: Updating IRQ levels in SPI with IRQ priorities compatible with Bluetooth stacks.
nrf5/device: Remove old startup files in asm, which has now been replaced with c-implementation.
nrf5: Update Makefile to add c-implementation of startup scripts instead of the .s files.
nrf5/device: Adding startup files in .c to replace current asm versions.
nrf5/examples: Tuning Bluetooth LE example controller python script after testing out the example live. A motor speed of 100 was not enough to lift the airplane. Also, turning was hard without setting higher angle values. The new values are just guessed values. However, the flying experience was good.
nrf5/hal/irq: Adding include of nrf_nvic.h if s132 bluetooth stack is used to resolve IRQ function wrappers on newer bluetooth stacks.
nrf5/drivers/ticker: Removing unused code.
nrf5/examples: Adding music example. Only working if bluetooth stack is not enabled.
nrf5/boards/microbit: Disable music and softPWM as there are some issues with the ticker.
nrf5: Adding -fstack-usage flag to gcc CFLAGS to be able to trace stack usage on modules.
nrf5/drivers/ticker: Removing LowPriority callback from nrf51 as there is only one SoftwareIRQ free if bluetooth stack is enabled. Also setting new IRQ priority on SlowTicker to 3 instead of 2, to interleave with bluetooth stack if needed. Updating all NVIC calls to use hal_irq.h defined static inlines instead of direct access.
nrf5/hal/irq: Adding IRQ wrappers if Bluetooth Stack is present.
nrf5: Facilitate option to configure away the modble if needed. Enabled if MICROPY_PY_BLE config is enabled in bluetooth_conf.h.
nrf5/boards/microbit: Enable music module by default. However, the timer and rtc modules have to be disabled. Bluetooth support broken. Optimization needed.
nrf5/modules/machine: Quickfix. Update timer object to not allow instantiation of Timer(0) if SOFT_PWM is enabled by board.
nrf5/hal/timer: Quickfix. Disable IRQ handler if SOFT_PWM is configured to be enabled. The ticker driver has a separate IRQ handler for this timer instance.
nrf5/drivers/ticker: Add compile config guard in ticker.c to only include the driver if SOFT_PWM is configured in by board.
nrf5/drivers/softpwm: Renaming pwm_init to softpwm_init to not collide on symbol name with pwm_init in nrf52 machine PWM object.
nrf5: Add modmusic QSTR definition of notes to qstrdefsport.h.
nrf5: Update Makefile to include ticker.c and the renamed softpwm. Also updating include paths to include modules/music and drivers/.
nrf5: Adding include of modmusic.h in main.c.
nrf5: Call microbit_music_init0() if enabled in main.c.
nrf5/modules/music: Expose public init function for music module.
nrf5/modules/music: Update modmusic to use updated includes. Add extern ticks. Add a function which initializes pwm and ticker, registers the ticker callback, and starts the pwm and ticker. This corresponds to the microbit port main.cpp init.
nrf5/drivers/softpwm: Enable use of ticker in softpwm driver.
nrf5/drivers/ticker: Adding ticker.c/.h from microbit port.
nrf5/drivers/pwm: Renaming pwm.c/.h to softpwm.c/.h
nrf5/drivers/pwm: Expose pwm_init() as public function.
nrf5/modules/ubluepy: Making peripheral conn_handle volatile. Upon connection event, the variable is accessed in thread mode. However, the main-loop is blocking on conn_handle != 0xFFFF. If this is not volatile, optimized code will not exit the loop.
nrf5/drivers/bluetooth: As callback functions are in most use cases set to NULL upon the last event to get the public API function out of blocking mode, these function pointers have to be declared volatile, as they are updated to NULL in interrupt context but read in the blocking main thread.
nrf5/examples: Fixing overlapping function names and variable names inside the object. Also removing some print statements. Tuning max angle from -7/7 to -25/25.
Glenn Ruben Bakke <glennbakke@gmail.com>
Powerup (#26)
* nrf5/examples: Adding python example template for PowerUp 3.0 Bluetooth LE controlled Paper Airplane.
* nrf5: Enable bluetooth le central while developing powerup 3.0 example.
* nrf5/examples: Backing up powerup 3.0 progress.
* nrf5/examples: Adding working example on how to control PowerUp 3.0 paper airplane using bluetooth le.
* nrf5/bluetooth: Disable central role.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/modules/ubluepy: Correcting alignment of enum values in modubluepy.h.
nrf5/drivers/bluetooth: Add implementation of client attribute write without response.
nrf5/modules/ubluepy: Pass on buffer to write in characteristic write central mode.
nrf5/modules/ubluepy: Updating characteristic object write function to be role aware. Either peripheral or central (gatts or gattc). Adding dummy call to attr_c_write if central is compiled in. Still in progress to be implemented.
nrf5/drivers/bluetooth: Adding template function for attr_c_write.
nrf5/drivers/bluetooth: Renaming attr_write and attr_notify to attr_s_write and attr_s_notify to prepare for introduction of attribute write for gatt client.
nrf5/modules/ubluepy: Fixing type in ubluepy_peripheral.c.
nrf5/modules/ubluepy: Setting peripheral role upon advertise() or connect().
nrf5/drivers/bluetooth: Adding role member to peripheral object to indicate whether Peripheral object is Peripheral or Central role.
nrf5/modules/ubluepy: Continue characteristic discovery until nothing more is found during the connect procedure.
nrf5/drivers/bluetooth: Refactoring code to group statics for s130 and s132 into the same ifdef. Also adding two empty lines in discovery functions to make them easier to read.
nrf5/drivers/bluetooth: Updating characteristic discovery to signal whether anything was found or not.
nrf5/modules/ubluepy: Continue primary service discovery until nothing more is found in the connect procedure.
nrf5/drivers/bluetooth: Updating primary service discovery api to take in start handle from where to start the service discovery. Also adjusting return parameter to signal whether anything was found or not.
nrf5/modules/ubluepy: Remove duplicate GAP event handler registration in peripheral.connect().
Glenn Ruben Bakke <glennbakke@gmail.com>
Support address types (#18)
* nrf5/modules/ubluepy: Adding new enumeration of address types.
* nrf5/modules/ubluepy: Adding constants that can be used from micropython for public and random static address types.
* nrf5/modules/ubluepy: Adding support for optionally setting address type in Peripheral.connect(). Public address is used as default. Address types can be retrieved from 'constants': either constants.ADDR_TYPE_PUBLIC or constants.ADDR_TYPE_RANDOM_STATIC (see the usage sketch after this list).
* nrf5/modules/ubluepy: Register central GAP event handler before issuing connect to a peripheral. Has to be done before the connect() function as a connected event will be propagated upon successful connection. The handler will set the connection handle which gets the connect function out of the busy loop waiting for the connection to succeed.
* nrf5/modules/ubluepy: Removing duplicate setting of GAP event handler in connect().
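A usage sketch for the address-type support described above; the peer address is a placeholder and the addr_type keyword name is an assumption based on the description:
```
from ubluepy import Peripheral, constants

p = Peripheral()
# Public address is the default; a random static address can be selected.
p.connect("c0:ca:fe:00:00:01",
          addr_type=constants.ADDR_TYPE_RANDOM_STATIC)
```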
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/modules/ubluepy: Register central GAP event handler before issuing connect to a peripheral. Has to be done before the connect() function as a connected event will be propagated upon successful connection. The handler will set the connection handle which gets the connect function out of the busy loop waiting for the connection to succeed.
nrf5/modules/ubluepy: Fixing compilation bug of wrong variable name when registering gattc event handler in ubluepy peripheral connect function (central mode).
nrf5/bluetooth: Updating makefiles with updated paths to bluetooth le components after moving files.
nrf5/bluetooth: Moving stack download script to drivers/bluetooth folder.
nrf5/bluetooth: Move bluetooth driver files to drivers/bluetooth. Move bluetooth stack download script to root folder.
nrf5/bluetooth: Guarding implementation against being linked in by surrounding it with BLUETOOTH_SD flag. Flag is only set if SD=<sdname> parameter is provided during make.
nrf5/bluetooth: Moving makefile include folder and source files of bluetooth driver, ble uart and ble module to main Makefile.
nrf5/bluetooth: Moving help_sd.h and modble.c to modules/ble.
nrf5/modules/machine: bugfix after changing to MP_ROM_PTR in machine module local dict.
nrf5: Syncing code with upstream master and converting all module and method tables to use MP_ROM macros. Also adding explicit casting of local dicts to (mp_obj_dict_t*).
nrf5/modules/timer: Fixing bug in timer_find(). The function could return an index out of range and start looking up past the end of the config array (index == size of array).
nrf5/modules/timer: Remove test which is covered by timer_find() function in the line below.
nrf5/modules/timer: Adding locals dict table and adding start/stop template functions. Also adding constants for oneshot and periodic to locals dict.
nrf5/modules/timer: Adding timer module to modmachine.
nrf5/boards: Adding micro:bit default music pin definition. Also adding config flag for enabling pwm machine module.
nrf5/hal/timer: Adding start/stop template functions to hal_timer.h/.c
nrf5/Makefile: Adding drivers/pwm.c and modules/music files to the source file list.
nrf5/modules/music: Adding config guard in musictunes.c and adding import of mphal.h.
nrf5/modules/music: Including mphal.h before config guard in modmusic.c. Also changed name on config guard to MICROPY_PY_MUSIC. Missing PWM functions will show up during linkage if the PWM module has not been configured.
nrf5/drivers/pwm: Including mphal.h before config guard in pwm.c.
nrf5: Updating mpconfigport.h to include music module as builtin. Adding new configuration for enabling music module. Activating MODULE_BUILTIN_INIT in order to run music module init function on import.
nrf5/modules/music: Backing up progress in music module.
nrf5/drivers/pwm: Updating soft PWM driver to only be included if SOFT_PWM config is set.
nrf5/hal/gpio: Add function to clear output register using a pin mask.
nrf5: Adding new configuration called MICROPY_PY_MACHINE_SOFT_PWM to mpconfigport.h. This config will enable software defined PWM using timer instead of using dedicated PWM hardware. Aimed to be used in nrf51 targets.
nrf5/boards: Removing PWM config set to 0 from pca10001 board. Config will later be re-introduced as SOFT_PWM variant.
nrf5/pwm: Updating config name of PWM to hardware PWM to prepare for introduction of soft variant.
nrf5/modules/music: Backing up progress in modmusic.
nrf5/modules/music: backing up porting progress in modmusic.c.
nrf5/modules/music: Commenting out backend function calls in modmusic.c to make module compile for now.
nrf5/modules/music: Updating music module to use pin_obj_t instead of microbit_pin_obj_t. Update include to drivers/pwm.h to resolve some undefined functions.
nrf5/modules/music: Removing c++ extern definition. Updating include list in modmusic.c. Removing module name from module struct.
nrf5/modules/music: Removing include of modmicrobit.h in musictunes.c.
nrf5/modules/music: Adding header to expose extern structs defined in musictunes.c
nrf5/drivers: Adding copy of microbit soft pwm.
nrf5/modules/music: Renaming microbitmusic files to modmusic/music.
nrf5/modules/music: Renaming microbit module to music.
nrf5/modules/microbit: Copying microbit music module to the port.
nrf5/modules/timer: Adding timer3 and timer4 to timer object in case of nrf52 target.
nrf5/modules/timer: Optimizing timer object structure and updating the module to use new hal_timer_init structures and parameters.
nrf5/hal/timer: Adding empty IRQ handlers for all timers.
nrf5/hal/timer: Changing hardcoded hal timer instance base to a lookup, so that the IRQ num can be detected automatically without the need for a struct parameter. Size of binary does not increase when using -Os.
nrf5: Updating the example in main.c on how to execute a string before the REPL is set up, to allow for boards with two LEDs. Todo for later: update this code so that it skips the LED toggle when no LEDs are defined, or use an example not depending on LEDs.
nrf5/bluetooth: Updating Bluetooth LE stack download script to allow to be invoked from any parent folder. No need to change directory to bluetooth/ in order to get the correct download target folder position. Using the script location to determine the target folder.
nrf5/boards: Adding board target for feather52 using s132 v.2.0.1 application offset even if the device is not using softdevice. To be worked on later.
nrf5/boards: decrease size of ISR region from 4k to 1k in custom feather52 linker script to get some more flash space.
nrf5/boards: Updating feather52 mpconfigboard.h to use correct uart pins, flow control disabled. Also adjusting leds down to two leds.
nrf5/boards: Updating path to custom linker script for feather52 board.
nrf5/boards: Renaming bluefruit_nrf52_feather to feather52 to shorten down the name quite drastically.
nrf5/boards: Updating path to custom bluefruit feather linker script after renaming board folder.
nrf5/boards: Renaming bluefruit_feather to bluefruit_nrf52_feather as there also exists an M0 variant of the board name.
nrf5/boards: Updating mpconfigboard.h for bluefruit nrf52 feather with correct board, mcu and platform name.
nrf5/boards: Updating adafruit bluefruit nrf52 feather linker script to use 0x1c000 application offset.
nrf5/boards: Renaming custom linker script for bluefruit feather to reflect that the purpose of the custom linker script is DFU. The script is diverging from the generic s132 v2 linker script in the offset of the application.
nrf5/boards: Adding custom linker script for adafruit nrf52 bluefruit feather to be able to detect the application upper boundary in flash. Pointing the s132 mk file to use this new custom linker script instead of the generic s132 v2 linker script.
nrf5/boards: Adding linker script for nrf52832 s132 v.2.0.1.
nrf5/boards: Adding template board makefiles and configs for bluefruit nrf52 feather. Copied from pca10040 target board. Linker script reference updated to use s132 v2.0.1. Non-BLE enable build disabled for now. Board configuration for leds, uart etc has not been updated yet from pca10040 layout.
nrf5/bluetooth: Correcting typo in test where s132 API version is settled.
nrf5/bluetooth: Updating bluetooth le driver to compile with s132 v.2.0.1 stack.
nrf5/bluetooth: Add new compiler flag to signal API variants of the s132 bluetooth le stack. The version is derived from the major number of the stack name.
nrf5/bluetooth: Remove hardcoded softdevice version as this now comes as parameter from board makefile.
nrf5/boards: Updating makefiles using bluetooth stack to use updated linker script file names.
nrf5/boards: Renaming bluetooth stack linker scripts to reflect version of the stack.
nrf5/boards: adding some spaces in s132 makefile for pca10040.
nrf5/boards: Renaming linker script for nrf52832 using bluetooth stack such that it also holds the version number of the stack. Updating linkerscript using the target linker script.
nrf5/bluetooth: Add support for downloading s132_2.0.1 bluetooth stack.
nrf5/bluetooth: Switch over to downloaded bluetooth stacks from nordicsemi.com instead of getting them through the SDKs. This will facilitate download of s132 v2.0.0 later.
nrf5/bluetooth: Fixing bug found when testing microbit. Newly introduced advertisement data pointer was not cleared on nrf51 targets. Explicitly set to NULL as no additional advertisement data is set. Raises a question on why the nrf51 static variable was not zero initialized. To be checked.
nrf5: Removing SDK_ROOT parameter to Makefile. Bluetooth stacks should be downloaded using the download_ble_stack.sh. The script should be run inside the bluetooth folder to work properly.
nrf5/bluetooth: Adding back SOFTDEV_HEX as the flash tools in the main Makefile use this to locate the hex file.
nrf5/bluetooth: Including bluetooth stack version in folder name after download to be able to detect if stack has been updated.
nrf5/bluetooth: Updating Bluetooth LE stack download script.
nrf5/bluetooth: Adding bash script to automate download of bluetooth le stacks
nrf5/examples: Adding example to show how to use current PWM module to control servo motors.
nrf5/modules/machine: Updating PWM module with two new kwargs parameters. One for setting the pulse width in a more fine-grained way. This value should not exceed the period value. Also adding support for setting the PWM mode, whether it is LOW duty cycle or HIGH duty cycle. By default, high to low is set (this could be changed).
nrf5/hal/pwm: Updating PWM implementation to support a manually set duty cycle period. Pulse width has precedence over duty cycle percentage. Also adding support for the two configurable modes, high to low, and low to high, duty cycles.
nrf5/hal/pwm: Adding more configuration options to the PWM peripheral wrapper. Possibility to set the pulse width manually, and also the mode. The mode indicates whether the duty cycle is low and then goes high, or is high and then goes low. Added a new type to describe the two modes.
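A hypothetical sketch of the PWM keywords described above; the keyword and constant names (pulse_width, mode, MODE_HIGH_TO_LOW) are assumptions, not taken from the commits:
```
from machine import PWM, Pin

# Hypothetical keyword/constant names based on the commit descriptions above.
pwm = PWM(0, pin=Pin("A04"),
          period=16000,
          pulse_width=1500,           # fine-grained pulse width; must not exceed period
          mode=PWM.MODE_HIGH_TO_LOW)  # or PWM.MODE_LOW_TO_HIGH
```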
nrf5: Adding hal_gpio.c to Makefile's source list.
nrf5/modules/machine: Updating Pin module to register an IRQ callback upon GPIO polarity change events.
nrf5/hal/gpio: Adding initial gpiote implementation to handle IRQ on polarity change on a gpio.
nrf5: Moving initialization of pin until after uart has been initialized for debugging purposes. This will make it possible to use uart to print out debug data when adding gpio irq handlers.
nrf5/hal/gpio: Adding some new structures and functions to register irq channels to gpio's using GPIOTE peripheral
nrf5/hal/gpio: Adding missing include.
nrf5/modules/machine: Style fix in pin object, indentation.
nrf5/modules/machine: Adding placeholder for irq method to pin object class.
nrf5/modules/machine: Adding pin irq type and basic functions and structures.
nrf5/hal/gpio: Reintroducing gpio polarity toggle event to be able to reference the short form of adding high_to_low and low_to_high together.
nrf5/hal/gpio: Updating hal_gpio.h with some tab fixes in order to make the file a bit more consistent in style.
nrf5/hal/gpio: Removing toggle event from the enumeration as that will be a combination of the rising and falling together.
nrf5/modules/machine: Removing toggle event trigger as that will be a combination of the rising and falling together.
nrf5/modules/machine: Adding new constants to pin object for polarity change triggers using the enumerated values in hal_gpio.h.
nrf5/hal/gpio: Adding new enumeration for input polarity change events.
nrf5/hal: Moving hal_gpio functions, types and defines from mphalport.h to a new hal_gpio.h.
Revert "lib/netutils: Adding some basic parsing and formating of ipv6 address strings. Only working with full length ipv6 strings. Short forms not supported at the moment (for example FE80::1, needs to be expressed as FE80:0000:0000:0000:0000:0000:0000:0001)."
nrf5: Removing leftover reference to deleted display module.
nrf5/usocket: Removing network modules related to the Bluetooth 6lowpan implementation as it depends on SDK libraries for now. Will be moved to a separate working branch.
nrf5: Removing custom display, framebuffer and graphics module to make branch contain core components instead of playground modules.
nrf5/modules/usocket: Updating import of netutils.h after upmerge with upstream master.
nrf5/bluetooth: Add some comment on the destination of the eddystone short-url.
nrf5/bluetooth: Updating Eddystone URL to point to https://goo.gl/x46FES which hosts the MicroPython WebBluetooth application which will be able to connect to the Bluetooth LE UART service of the device and create the REPL.
nrf5/bluetooth: Adding webbluetooth REPL template. Alternating advertisement of eddystone URL and UART BLE service every 500 ms. Adding new config parameter to bluetooth_conf.h to enable webbluetooth repl. Has to be configured in combination with BLE_NUS. Eddystone URL not pointing to a valid WebBluetooth application at the moment, but rather to micropython.org as a placeholder for now.
nrf5/modules/ubluepy: Adding method to Peripheral object to stop any ongoing advertisement. Adding compile guard to only include advertise and advertise_stop if peripheral role is compiled in.
nrf5/bluetooth: Adding function to stop advertisement if ongoing.
nrf5/modules/ubluepy: Adding support for starting advertisement from BLE UART REPL, by delaying registration of gatt/gatts and gattc handlers until needed in advertise or connect. If non-connectable advertisement is selected, handlers in peripheral new no longer override the other peripheral instances which have set the callbacks.
nrf5/bluetooth: Adding possibility to configure whether advertisement should be connectable or not.
nrf5/bluetooth: Removing legacy advertise function in the bluetooth driver, which only did a hardcoded eddystone beacon advertisement.
nrf5/help: Updating ble module help description to also include the address method.
nrf5/bluetooth: Renaming the ble module method address_print() to address(), as it will now return a string of the resolved local address. Updating the function to create a string out of the local address and return this.
nrf5/bluetooth: Update ble_drv_address_get to a new api which passes in an address struct to fill by reference. Updating implementation to copy the address data. Also ensuring that the bluetooth stack has been enabled before fetching the address from the bluetooth stack.
nrf5/bluetooth: Adding new structure which can hold the local address. Updating api prototype for ble_drv_address_get with an address structure by reference.
nrf5/bluetooth: Updating help text for ble module to also list the enabled() function, which queries the bluetooth stack on whether it is enabled or not.
nrf5/bluetooth: Removing advertise from ble module. Removing help text as well.
nrf5/examples: Adding python eddystone example using ubluepy api.
nrf5/modules/ubluepy: Open up Peripheral advertise method to pass custom data to the bluetooth driver. Allowing the method to take kwargs only if no args are set, to support setting the data kwarg only.
nrf5/modules/ubluepy: Adding new members to the ubluepy advertisement parameters, to hold custom data payload if set.
nrf5/bluetooth: Cleaning up stack enable function, to not set device name twice. Also, adding support for setting custom advertisement data.
nrf5/modules/ubluepy: Adding compile guard for UBLUEPY_CENTRAL around the char_read() call to ble_drv_attr_c_read().
nrf5/bluetooth: Moving central code inside central bluetooth stack defines to make peripheral only code compile again.
nrf5/examples: Updating ubluepy scan example to use constant value from ubluepy instead of hardcoded value.
nrf5/examples: Adding example on how to use the ubluepy Scanner object in order to scan for a device name and find the address of the device. This can subsequently be used to perform a Central role connect() using the Peripheral object.
nrf5/modules/ubluepy: Turn all attributes (addr, addr_type and rssi) into method calls instead of using a common .attr callback. Adding getScanData implementation, which parses the advertisement data and returns a list of tuples containing (ad_type, desc, value). The description is generated by peeking into the ad_types local dict map table and doing a reverse lookup on the value to find the QSTR.
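A sketch of how the reworked ScanEntry could be used; the scan duration argument is assumed to be milliseconds and the printed values are illustrative:
```
from ubluepy import Scanner

entries = Scanner().scan(500)  # scan for 500 ms (assumed unit)
for e in entries:
    print(e.addr(), e.addr_type(), e.rssi())
    for ad_type, desc, value in e.getScanData():
        print("  ", ad_type, desc, value)
```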
nrf5/modules/ubluepy: Adding ad_types constants in new object. Linking in ad_types object into the ubluepy.constants local dict.
nrf5/modules/ubluepy: Expose ubluepy constant objects as externs in modubluepy.h to be able to get access to the local dict tables in order to do a reverse lookup on value to resolve QSTR from external modules in c.
nrf5/modules/ubluepy: Upon advertisement event, also store the advertisement data.
nrf5/modules/ubluepy: Adding callback function to handle read response if gatt client has issued a read request. Also adding method for returning the uuid instance from the object.
nrf5/modules/ubluepy: Adding value data member to the characteristic object. This can hold the value data when a gatt client performs a read and the value has to be transferred between interrupt and main thread.
nrf5/bluetooth: Updating bluetooth driver to support GATT client read of a characteristic value. Data passed to caller in interrupt context, and copy has to be performed. The function call is itself blocking.
nrf5/modules/ubluepy: Adding uuid() function to service object to return UUID instance of the service.
nrf5/modules/ubluepy: Adding binVal() function to the ubluepy UUID object. For now returning the uint16_t value of the UUID as a small integer.
nrf5/modules/ubluepy: Adding dummy function call to ble_drv_attr_c_read.
nrf5/bluetooth: Adding new api for reading attribute as gatt client. Renaming old ble_drv_attr_read function to ble_drv_attr_s_read to indicate the server role.
nrf5/bluetooth: Adding event handling cases for gatt client read, write and hvx events.
nrf5/modules/ubluepy: Tab-fix
nrf5/modules/ubluepy: Updating peripheral object to handle characteristic discovery (central mode).
nrf5/modules/ubluepy: Adding start and end handle to service object.
nrf5/bluetooth: Adding support for central characteristic service discovery. Updating primary service discovery to block until all services have been created in the peripheral object before returning from the bluetooth driver. This pattern is also applied to the characteristic discovery.
nrf5/modules/ubluepy: Updating ubluepy peripheral object to new bluetooth driver API. Starting to populate service objects and uuid objects. Also adding the service to the peripheral object through the regular static function for adding services. The handle value for the primary service is assumed to be the first element in the handle range, i.e. the start_handle reported by the service discovery.
nrf5/bluetooth: Updating bluetooth driver to do service discovery, doing callbacks to ubluepy upon each individual primary service discovered. Using intermediate structure defined by the driver, to abstract bluetooth stack specific data in ubluepy.
nrf5/modules/ubluepy: Adding some work in progress on service discovery.
nrf5/bluetooth: Adding implementation to the discover service function. Adding handler for gatt client primary service discovery response events, and passing this to the ubluepy upon reception.
nrf5/bluetooth: Adding function parameters and return type to service and characteristic discovery template functions.
nrf5/bluetooth: Adding template functions for service discovery in bluetooth driver.
nrf5/bluetooth: Adding function to register gattc event handler (central).
nrf5/bluetooth: Adding intermediate gattc callback function type in bluetooth driver.
nrf5/bluetooth: Turning off debug logging in bluetooth driver, which does not work well with bluetooth REPL mode.
nrf5/bluetooth: Fixing some smaller tab errors in the bluetooth driver.
nrf5/bluetooth: Updating bluetooth le driver to handle GAP conn param update request. Also updating minor syntax in previous switch case.
nrf5/boards: Increase heap size in the nrf52832 w/s132 bluetooth stack linker script.
nrf5/modules/ubluepy: Update connect method to parse dev_addr parameter and pass it to the bluetooth driver, going through an allocated heap buffer. Adding call to the bluetooth driver to issue a connect. Hardcoding address type for now.
nrf5/bluetooth: Updating connect function in the bluetooth driver to do a successful connect to a peripheral device.
nrf5/modules/ubluepy: Adding template function for central connect() in peripheral object.
nrf5/modules/ubluepy: Adding locals dict to ScanEntry, introducing a function to retrieve scan data. Not working as expected together with .attr. It looks like locals dict functions are treated as attributes and cannot be resolved.
nrf5/bluetooth: Adding function for connecting to a device (in central role). Not yet tested.
nrf5/modules/ubluepy: Return BLE peer address as string instead of bytearray. Updated struct in modubluepy.h to use a mp_obj_t to hold a string instead of a fixed 6-byte array. Stripped down ScanEntry print out to only contain class name, peer address available through addr attribute.
nrf5/bluetooth: capture address type in addition to advertisement type in bluetooth advertisement reports.
nrf5/modules/ubluepy: Correcting rssi member in scan_entry object to be int instead of uint.
nrf5/modules/ubluepy: Adding attribute to ScanEntry object for getting address (returning bytearray), type (returning int) and rssi (returning int).
nrf5/modules/ubluepy: Copy address type and rssi to the ScanEntry object upon reception of an advertisement report callback.
nrf5/bluetooth: Adding address type to bluetooth stack driver advertisement structure, and fill the member when an advertisement report is received.
nrf5/modules/ubluepy: Swapping address bytes when copying the bluetooth address over to the ScanEntry object during the advertisement scan report event.
nrf5/modules/ubluepy: Extending print of ScanEntry object to also include the bluetooth le address.
nrf5/modules/ubluepy: Create a new adv report list for each individual scan. Create a new ScanEntry object instance on each advertisement event received and append it to the current adv_report list.
nrf5/modules/ubluepy: Adding print function to scan_entry object.
nrf5/modules/ubluepy: Populating ubluepy_scan_entry_obj_t with members that are interesting to keep for the ScanEntry object.
nrf5/bluetooth: Moving callback definitions to bluetooth driver header. Refactoring bluetooth driver, setting new names on callback functions and updating api to use new callback function name prefix.
nrf5/modules/ubluepy: Extracting advertisement reports and adding some data to the list before returning it in the scan() method.
nrf5/bluetooth: Adding handling of advertisement reports in the bluetooth driver and issuing a callback to ubluepy. A bit ugly implementation and has to be re-worked.
nrf5/bluetooth: Adding adv report data structure to pass to ubluepy upon an adv report event. Adding new api for setting the callback where advertisement events are handled in ubluepy.
nrf5/modules/ubluepy: Adding adv_reports member to scanner object, to hold the result of scan.
nrf5/modules/machine: Cleaning up uart a bit more. Removing unused any() method, and aligning print and local dict names to use machine_uart prefix.
nrf5/bluetooth: Turn off bluetooth printf logging.
nrf5: Add back ublupy scanner and scan entry source files in Makefile.
nrf5/bluetooth: Enable implementation in scan start function in the bluetooth stack driver.
nrf5/boards: Adjust heap end after increased .data usage in nrf52832 s132 linker script.
nrf5/bluetooth: Adding more implementation in scan start function. However, commented out for the time being, as there are some memory issues when activating central.
nrf5: Removing ubluepy scanner and scan entry from Makefile source list until nrf52 central issues has been resolved.
nrf5/bluetooth: Correcting indentation.
nrf5/bluetooth: Adding some implementation to scan_start function.
nrf5/modules/ubluepy: Adding scan method to the Scanner object. Adding locals dict table.
nrf5/bluetooth: Adding empty scan_start and scan_stop function to the bluetooth driver.
nrf5/modules/ubluepy: Adding constructor function to scanner object.
nrf5/modules/ubluepy: Adding print function to Scanner object.
nrf5/modules/ubluepy: Disable all central related functions in the Peripheral object for now, even if MICROPY_PY_UBLUEPY_CENTRAL is enabled.
nrf5/modules/ubluepy: Activate Scanner and ScanEntry objects if MICROPY_PY_UBLUEPY_CENTRAL is set.
nrf5/bluetooth: Adding new configuration flag for s132 bluetooth stack, to enable/disable ubluepy central. Disabled by default.
nrf5: Adding ubluepy_scanner.c and ubluepy_scan_entry.c to Makefile source list.
nrf5/modules/ubluepy: Adding template object typedefs for scanner and scan entry, and extern definition for scanner and scan_entry object type in modubluepy.h
nrf5/modules/ubluepy: Adding templates for central role Scanner and ScanEntry objects.
nrf5/uart: Moving UART from pyb to machine module.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/uart: Refactoring UART module and HAL driver
Facilitating the addition of a second HW uart. Moving pyb_uart into
machine_uart. Adding return error codes from hal_uart functions,
if the hardware detects an error.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/modules: Updating uart object to allow baudrate configuration.
nrf5/bluetooth: Moving bluetooth_conf.h to port root folder to make it more exposed.
nrf5/boards: Remove define of machine PWM module configuration in nrf51 targets, as the device does not have a HW PWM peripheral.
nrf5: Disable machine PWM module by default if board does not define it.
nrf5/boards: Disable all display modules in pca10028 board config.
nrf5: Updated after merge with master. Updating nlr_jump_fail to call __fatal_error in order to provide a non-returning function call.
nrf5/boards: Adding more heap memory to the nrf51 256k/32k s110 linker script. Leaving 2k for stack.
nrf5/modules/machine: Adding __WFI() on machine.deepsleep()
nrf5/modules/machine: Adding __WFE() on machine.sleep()
nrf5/modules/machine: Adding enable_irq() and disable_irq() method to the machine module. No implementation yet for the case where bluetooth stack is used.
nrf5/modules/rtc: Adding support for stopping and restarting rtc (if periodic) for all the instances of RTC.
nrf5/modules: Updating RTC kwarg from type to mode to set ONESHOT or PERIODIC mode.
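A hypothetical sketch of the RTC usage described in these commits; the mode keyword and the ONESHOT/PERIODIC constants come from the commit text, while the remaining keyword names and the callback signature are assumptions:
```
from machine import RTC

def tick(timer_id):
    # Assumed callback signature; fires on each compare match.
    print("rtc fired", timer_id)

# Hypothetical keyword names other than mode.
rtc = RTC(1, period=8, mode=RTC.PERIODIC, callback=tick)
rtc.start()
# ... later ...
rtc.stop()
```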
nrf5/modules: Adding support for periodic RTC callback.
nrf5/hal: hal_rtc update. Adding current counter value to period value before setting it in the compare register.
nrf5/modules: Updating rtc module with non-const machine object list in order to allow setting callback function in constructor.
nrf5/hal: Adding initialization of LFCLK if not already enabled in hal_rtc.
nrf5/modules: Moving irq priority settings in RTC object to rtc_init0 when initializing the hardware instances. Also modifying comments a bit. Adding simple example in comment above make_new function on how the object is intended to work.
nrf5: Updating main.c to initialize the rtc module if enabled.
nrf5/modules: Added RTC into the machine module globals dict.
nrf5/modules: Updating rtc module. Not working yet. Updated to align with new hal_rtc interface. Added start and stop methods. Allowing callback function set from init. This should be moved to start function, not set in main.
nrf5/hal: Updating hal RTC implementation.
nrf5/hal: Adding hal_irq.h which defines a set of static inline functions to do nvic irq operations.
nrf5/modules: Updating machine uart module to use new hal uart interface name.
nrf5/hal: Renaming uart hal function to use hal_uart prefix.
nrf5/modules: Updating readfrom function in machine i2c module to use the new hal function which has been implemented.
nrf5/hal: Adding untested implementation of twi read. Lacking sensors to test with :)
nrf5/boards: Renaming linker script for all nrf51 and nrf52 into more logical names. Updating all boards with new names.
nrf5/bluetooth: Updating header guard in bluetooth_conf.h to reflect new filename.
nrf5/bluetooth: Updating old references to 'sdk' to use the new folder name 'bluetooth' in makefiles.
nrf5: Renaming sdk folder to bluetooth.
nrf5: Merging sdk makefiles into bluetooth_common.mk. s1xx_iot is still left out of this refactoring.
nrf5: Renaming nrf5_sdk_conf.h to bluetooth_conf.h
nrf5: Starting process of renaming files in sdk folder to facilitate renaming of the folder and make it more logical. Transition will be from sdk to bluetooth.
nrf5/boards: Adding support for SPI, I2C, ADC, and Temp in machine modules in micro:bit target. Also activating hal drivers for the peripherals.
nrf5/sdk: Updating low frequency clock calibration from 4 seconds to 250 ms for stack enable when BLUETOOTH_LFCLK_RC is enabled.
nrf5/boards: Updating nrf51822_aa_s110.ld to be more generic, leaving all RAM not used for stack, .bss and .data to the heap.
nrf51: Removing stack section from startup file as it got added to the final hex file. Thanks dhylands for helping out.
nrf5/boards: Adding BLUETOOTH_LFCLK_RC to CFLAGS in microbit s110 makefile.
nrf5/sdk: Adding support for initializing the bluetooth stack using RC oscillator instead of crystal. If BLUETOOTH_LFCLK_RC is set in CFLAGS, this variant of softdevice enable will be activated.
nrf5: Initialize repl_display_debugging_info in pyexec.c for cortex-m0 targets.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Updating ringbuffer.h to use volatile variables for start and end.
nrf5/sdk: Rename cccd_enable variable to m_cccd_enable in bluetooth le UART driver. Also made the variable volatile.
nrf5/modules: Updating example in ubluepy header to use handle instead of data length upon reception of an event.
nrf5/modules: Updating ubluepy peripheral to pass handle value to python event handler instead of data length. Data length can be derived from the bytearray structure.
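A sketch of the adjusted python-side event handler, with illustrative names; only the change from data length to handle is taken from the commit:
```
def ble_event_handler(event_id, handle, data):
    # The handler now receives the attribute/connection handle; the data
    # length is no longer passed separately since len(data) provides it.
    print("event", event_id, "handle", handle, "bytes", len(data))
```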
nrf5/sdk: Updating bluetooth le driver to handle SEC PARAM REQUEST by replying that pairing is not supported. Moving initialization of adv and tx in progress state variables to stack enable function.
nrf5/modules: Enable ubluepy constants for CONNECT and DISCONNECT for other bluetooth stacks than s132.
nrf5/sdk: Fixing unaligned access issues for nrf51 (cortex-m0) in bluetooth le driver
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Removing SDK dependant BLE UART Service implementation
The sdk_12.1.0 nrf52_ble.c implementation was dependent on SDK components.
This has been replaced with the ble_uart.c implementation using a standalone
bluetooth driver implementation without the need for SDK components.
Also, sdk.mk has been updated to not use a special linker script.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf52: Removing folder to not confuse which folder is in development
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Removing ble_repl_linux.py
Script does not really work very well with blocking char read and
async ble notifications printing data when terminal stdout is blocked
by readchar. Bluetooth UART profile implemented in ble_uart.c is
now working with tralamazza's nus_console nodejs script.
Ref: https://github.com/tralamazza/nus_console
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5: Add default config for MICROPY_PY_BLE_NUS (0)
Disable Bluetooth UART to be used for REPL by default. Can be overridden
in nrf5_sdk_conf.h. It is defined in mpconfigport.h as it is connected to
mphalport.c, where the config is used to determine whether default print
functions should be using HW UART or Bluetooth UART.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Add ble_uart.c to source list
ble_uart.c implements UART Bluetooth service on top of the
bluetooth stack driver api calls. Can be enabled to be compiled
in by defining MICROPY_PY_BLE_NUS = 1 in nrf5_sdk_conf.h.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Removing include of sdk_12.1.0's build.mk
As no sources are needed from the SDK this build makefile
can be deleted.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5: Force implementation of tx_str_cooked function if BLE NUS enabled.
If BLE UART service has been enabled, the mp_hal_stdout_tx_strn_cooked
is not defined by default anymore, and has to be implemented by the
UART driver (in this case BLE).
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Adding compiler guard around exchange MTU request event.
As s110 does not have this event or function call to answer an MTU
exchange request, this is excluded for all versions other than s132
for now.
Bander Ajba <banderajba@macwan.local>
minor documentation and extra tabs removal fixes
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Updating BLE UART implementation by swapping TX and RX uuid and characteristic handling. Removed dummy write delay of 10 ms.
nrf5/sdk: Backing up progress in bluetooth le driver. Adding new gap and gatts handlers. Added handling of tx complete events when using notification, responding to MTU request, and setting of default connection parameters.
Bander Ajba <banderajba@macwan.local>
fixed temp module to allow for instance support
did the required modification to merge the temperature sensor module
Dave Hylands <dhylands@gmail.com>
Fix up Makefile dependencies
I also didn't see any real reason for mkrules.mk to exist,
so I merged the contents into Makefile.
Now you can do:
```
make BOARD=pca10028 clean
make BOARD=pca10028 flash
```
and it will work properly.
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5: Updating Makefile to use correct variable for setting directory of file to freeze as mpy.
nrf5: Setting stack top in main.c. Thanks dhylands for pointing this out.
nrf5/sdk: Backing up progress in BLE UART driver. Adding ringbuffer in order to poll bytes from received data in REPL main loop.
nrf5/modules: Updating ubluepy example to print out gatts write events with data.
nrf5/boards: Updating pca10028 bluetooth stack targets to have a MCU_SUB_VARIANT.
Bander Ajba <banderajba@macwan.local>
added support for hardware temperature sensor
Glenn Ruben Bakke <glennbakke@gmail.com>
nrf5/sdk: Adding macro based ringbuffer written by Philip Thrasher. Source: https://github.com/pthrasher/c-generic-ring-buffer/blob/master/ringbuffer.h. Copyright notice copied into the file, and file reviewed by Philip.
nrf5/sdk: Updating bluetooth le driver to extract data length and pointer from the event structure upon gatts write operation.
nrf5/modules: Expose ubluepy characteristic and peripheral types as external declarations in the ubluepy header.
nrf5: Updating main to initialize bluetooth le uart module right before bluetooth REPL is started.
nrf5/sdk: Updating bluetooth le uart implementation to block until cccd is written.
nrf5/sdk: Backing up ubluepy version of ble uart service for Bluetooth LE REPL.
nrf5/modules: Updating ubluepy example in header to align with bluetooth uart service characteristics.
nrf5/modules: Implementing characteristic write method. Possible to use write for both write and notifications.
nrf5/sdk: Renaming bluetooth driver function ble_drv_attr_notif to *_notify.
nrf5/modules: Adding props and attrs parameters to ubluepy characteristic constructor to override default values. Adding method for reading characteristic properties. Adding values to the local dict table that give the possibility to OR together a configuration of properties and attributes in the keyword argument during construction.
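A sketch of constructing a characteristic with OR'ed properties and attributes; the exact constant names (PROP_READ, PROP_NOTIFY, ATTR_CCCD) are assumptions based on the description:
```
from ubluepy import Characteristic, UUID

# Hypothetical constant names for the properties/attributes described above.
char = Characteristic(UUID("0x2A37"),
                      props=Characteristic.PROP_READ | Characteristic.PROP_NOTIFY,
                      attrs=Characteristic.ATTR_CCCD)
```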
nrf5/sdk: Adding parsing of characteristic properties and attributes (extra descriptions for the characteristic, for now cccd).
nrf5/modules: Adding new members to ubluepy characteristic object, props and attrs. Adding enum typedefs for various properties and attributes.
nrf5/modules: Syncing uart module code after upmerge with upstream master.
nrf5/boards: Releasing more RAM for heap use in the nrf51 s110 linker script.
nrf5/modules: Adding new gatts handler and registration of it during creation of a peripheral object. Also, added forwarding to python callback function (for now the same as for GAP).
nrf5/modules: Adding new callback type in modubluepy for gatts events.
nrf5/sdk: Adding support for setting gatts handler in the bluetooth le driver.
nrf5/modules: Adding constant for CCCD uuid in ubluepy constants dict.
nrf5: Adding ubluepy_descriptor.c into source list to compile.
nrf5/modules: Adding template for ubluepy descriptor class implementation.
nrf5/modules: Adding object structure for ubluepy descriptor.
nrf5/sdk: Adding template functions for attribute read/write/notify in bluetooth le driver.
nrf5/modules: Adding getCharacteristic method in ubluepy service class. This function returns the characteristic with the given UUID if found, else None. The UUID parameter has to be of UUID class type; any other value, like a string, will throw an exception.
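A usage sketch of the lookup method; the UUID value is a placeholder and `service` is assumed to be an existing Service instance:
```
from ubluepy import UUID

char = service.getCharacteristic(UUID("0x2A37"))
if char is not None:
    print("found characteristic", char.uuid())
# Passing anything other than a UUID instance (eg a plain string) raises an exception.
```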
nrf5/modules: Updating method documentation in ubluepy peripheral and service.
nrf5/modules: Adding new method, getCharacteristics(), in the ubluepy service class. The method returns the list of characteristics which have been added to the service instance.
nrf5/modules: Updating method documentation in ubluepy peripheral class.
nrf5/modules: Updating ubluepy service. Creating empty characteristic list in constructor. Appending characteristic to the list when added.
nrf5/modules: Changed return in ubluepy addService() function to return mp_const_none instead of boolean.
nrf5/modules: Correcting tabbing in ubluepy peripheral impl.
nrf5/modules: Updating ubluepy peripheral. Creating empty service list in constructor. Appending services to the list when added. Added new function for retrieving the service list; getServices().
nrf5/modules: Adding new members in ubluepy peripheral and service objects to keep track of child elements. Peripheral will have a list of services, and service will have a list of characteristics.
nrf5/modules: Removing connection handle from python gap event handler callback function.
nrf5/modules: Updating ubluepy example in the header file with new function call to add service to a peripheral instance.
nrf5/modules: Updating peripheral class to assign peripheral parent pointer to services that are added. Also added a hook in the bluetooth le event handler to store the connection handle value, so that services and characteristics do not have to handle this value themselves.
nrf5/modules: Updating service object to clear pointer to parent peripheral instance. Also assigning pointer to the service when adding a new characteristic.
nrf5/modules: Updating print to also include peripheral's connection handle. Setting pointer to service parent instance to NULL.
nrf5/modules: Correcting event id numbers for connect and disconnect event in ubluepy_constants.py
nrf5/modules: Shuffle order of typedef in ubluepy header. Adding service pointer in characteristic object. Adding peripheral pointer to the service structure. When populated, the characteristic would get access to conn_handle and service handle through pointers. Also service would get access to peripheral instance.
nrf5/modules: adding template functions for characteristic read and write.
nrf5/modules: Adding constants class to ubluepy which will contain easy access to common bluetooth le numbers and definitions for the bluetooth stack.
nrf5/modules: Updating example in ubluepy header with 16-bit uuid's commented out, to show usage.
nrf5/sdk: Adding support for adding 16-bit uuid's in the advertisement packet. The services in the parameter list can mix 16-bit and 128-bit.
nrf5/sdk: Updating sdk_common.mk with new filename of bluetooth le driver.
nrf5: Updating all includes of softdevice.h to ble_drv.h
nrf5/sdk: renaming softdevice.* to ble_drv.*
nrf5/sdk: Renaming bluetooth driver functions to have ble_drv* prefix. Updating modules using it.
nrf5/sdk: Enable ubluepy module if s110 bluetooth stack is enabled.
nrf5/sdk: Updating bluetooth driver to only set periph and central count if s132 bluetooth stack. These parameters do not exist in older stacks.
nrf5/modules: Updating bluetooth driver and ubluepy to use explicit gap event handler. Adding connection handle parameter to the gap handler from ubluepy. Resetting advertisement flag if a connection event is received, in order to allow for subsequent advertisement if disconnected again. Example in ubluepy header updated.
nrf5: Adding target to flash bluetooth stack when using pyocd-flashtool.
nrf5/modules: Guarding callback to python event handler before issuing the call in case it is not set.
nrf5/modules: Updating ubluepy example to turn led2 on and off when receiving connected and disconnect bluetooth event.
nrf5/sdk: Updating bluetooth driver to have configurable logs.
nrf5/modules: updating ubluepy and bluetooth driver to support python created event handler. Added registration of callback from ubluepy against the bluetooth driver and dispatching of events to the user supplied python function.
nrf5/modules: Splitting includes to be inside or outside of the compile guard in ubluepy. This way, all micropython specific includes will be outside, and internal will be inside. This way, there will not be any dependency towards ubluepy headers if not compiled in.
nrf5/modules: Adding two new functions to ubluepy peripheral class to set specific handlers for notifications and connection related events.
nrf5: Set ubluepy to disabled by default in mpconfigport.h if not configured.
nrf5/modules: Moving includes inside config defines to make non-ubluepy targets compile again.
nrf5/modules: Adding 'withDelegate' function to peripheral class.
nrf5/modules: Adding ubluepy delegate type to modubluepy globals table.
nrf5: Adding ubluepy_delegate.c to list of source files to compile.
nrf5/modules: Adding new object struct for delegate class and adding a delegate struct member to Peripheral class to bookkeep the callback object when an event occurs.
nrf5/modules: Adding template for ubluepy delegate class.
nrf5/sdk: Fixing debug print in bluetooth driver to not use >>> prefix. Adding one more print for connection parameter update.
nrf5/sdk: Correcting advertisement packet in bluetooth driver in order to make the device connectable.
nrf5/sdk: Implementing simple event handler for bluetooth stack driver.
nrf5/sdk: Disable all sdk components from being included in the build while implementing ubluepy, overlap in IRQ handler symbol.
nrf5/modules: Shortening the device name to be advertised in the example to make it fit with a 128-bit complete UUID.
nrf5/modules: Bugfix in ubluepy_uuid_make_new. Used wrong buffer to register vendor specific uuid to the bluetooth stack.
nrf5/sdk: Updating advertisement function in bluetooth le driver to add the 128-bit complete service UUID provided in the service list to the advertisement packet.
nrf5/sdk: Updating advertisement function in bluetooth le driver to iterate through the services passed in and calculate individual uuid sizes.
nrf5/modules: Updating advertisement method in peripheral class to memset the advertisement structure. Also applying the service list, if set, to the advertisement structure.
nrf5/modules: Updating ubluepy module header usage example. Correcting enum for UUID types to start index from 1. Expanding advertisement data structure to also include service list members.
nrf5/sdk: Adding static boolean for keeping track of whether advertisement is in progress in the bluetooth driver. Now, advertisement can be restarted with new data any time.
nrf5/modules: Updating ubluepy peripheral class to use mp_const_none instead of MP_OBJ_NULL for unset values in the advertisement method parameter list. Adding extraction of the service list in the advertisement method. The list is not yet handled.
nrf5/modules: Adding a few examples in the modubluepy.h to get easier copy paste when implementing.
nrf5/sdk: Successful device name advertisement. Added flags to the advertisement packet and enabled device name byte copy into the advertisement data.
nrf5/modules: Turning ubluepy peripheral advertisement function into a keyword argument function so that it is possible to set device name, service uuids, or a manually constructed data payload.
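A sketch of the keyword-based advertise call; the device_name and services keyword names are assumptions based on the description, and the UUID value is a placeholder:
```
from ubluepy import Peripheral, Service, UUID

u = UUID("6e400001-b5a3-f393-e0a9-e50e24dcca9e")  # placeholder 128-bit UUID
s = Service(u)
p = Peripheral()
p.addService(s)
p.advertise(device_name="mpy-nrf5", services=[s])
```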
nrf5/sdk: Updating softdevice driver with function to set advertisement data and start advertisement. Does not apply device name yet. Work in progress.
nrf5/modules: Adding new structure to ubluepy in order to pass advertisement data information to the bluetooth le stack.
nrf5/modules: Adding function to add characteristics to the ubluepy service. Enable function in service's local dict table.
nrf5/modules: Adding function in bluetooth le driver to add characteristic to the bluetooth le stack.
nrf5/modules: Adding more members to the ubluepy characteristic object structure.
nrf5/modules: Adding characteristic class to ubluepy globals table.
nrf5/modules: Updating ubluepy characteristic implementation.
nrf5/modules: Re-arranging includes in ubluepy_service.c
nrf5/modules: Adding ubluepy charactaristic type struct.
nrf5/modules: Updating ubluepy with more implementation in UUID and Service. Adding function in bluetooth le driver which adds services to the bluetooth stack. Making service take UUID object and Service type (primary/secondary) as constructor parameter in Service class.
nrf5: Adding ubluepy to include path.
nrf5/modules: Updating ubluepy UUID class constructor with some naive parsing of 128-bit UUIDs, and pass this to the softdevice driver for registration.
nrf5/sdk: Adding new function to the softdevice handler driver to add vendor specific uuids and return an index to the entry back by reference.
nrf5/modules: Updating ubluepy UUID class with a constructor that can construct an object based on a 16-bit hex value or a 16-bit string prefixed with '0x'.
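A minimal sketch of the two accepted 16-bit forms described above:
```
from ubluepy import UUID

u1 = UUID(0x180D)    # from a 16-bit hex (integer) value
u2 = UUID("0x180D")  # from a 16-bit string prefixed with '0x'
```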
nrf5/modules: Adding Peripheral, Service and UUID class to the ubluepy module globals table.
nrf5/modules: Extending the implementation of Peripheral class in ubluepy.
nrf5/modules: Extending the implementation of UUID class in ubluepy.
nrf5/sdk: Adding configuration to enable the ubluepy peripheral class when using softdevice 132 from the SDK.
nrf5: Adding ubluepy module to builtins if bluetooth stack is selected. Disable NUS profile by default. Adding source for ubluepy module into makefile to be included in build. The source is only linked if MICROPY_PY_UBLUEPY is set.
nrf5: Aligning code after upmerge with master. Mostly FAT FS related updates. Not tested after merge.
nrf5/modules: Adding new and print function to ubluepy peripheral class. Template functions only.
nrf5/modules: Adding ubluepy UUID class template.
nrf5/modules: Adding ubluepy characteristic class template.
nrf5/modules: Adding missing #endif. Also adding property templates to the local dict.
nrf5/modules: Adding ubluepy service class template.
nrf5/modules: Updating ubluepy with class function placeholders.
nrf5/modules: Renaming ble module folder to ubluepy.
nrf5/modules: Adding new template file for ubluepy Peripheral class.
nrf5/pyb: Moving pyb module into modules/pyb.
nrf5/utime: Moving utime module into modules/utime.
nrf5/uos: Moving uos module into modules/uos.
nrf5/network: Moving network module into modules/network. Adding include path to network as it is needed by the usocket module.
nrf5/usocket: Moving usocket module into modules/usocket.
nrf5/led: Moving led module into modules/machine.
nrf5/pwm: Moving pwm module into modules/machine.
nrf5/rtc: Moving rtc module into modules/machine.
nrf5/timer: Moving timer module into modules/machine.
nrf5/pin: Moving pin module into modules/machine.
nrf5/adc: Moving adc module into modules/machine.
nrf5/i2c: Moving i2c module into modules/machine.
nrf5/spi: Moving spi module into modules/machine.
nrf5/uart: Moving uart module into modules/machine to start converting it into machine module and not pyb.
nrf5/machine: Moving modmachine into modules/machine folder. Updating Makefile.
nrf5/drivers: Renaming folder to modules.
nrf5: Renaming python modules folder to freeze to give the folder its right meaning. The scripts put into this folder will be frozen.
nrf5/drivers: Adding template for ubluepy module.
nrf5/sdk: Adding compilation config whether to include BLE NUS implementation. Config found in sdk/nrf5_sdk_conf.h. NUS enabled for s132 targets by default.
nrf5: Fall back to HW UART when Bluetooth LE UART has not been enabled.
nrf5: Updating main.c to use MICROPY_PY_BLE_NUS as switch for regular uart initialization or bluetooth le uart initialization.
nrf5/sdk: Adding work-in-progress script to connect to bluetooth le REPL using bluepy python module in linux.
nrf5/boards: Updating board makefiles for s132 and s1xx target for pca10040 (nrf52832) by adding sub variant and device define to the makefiles.
nrf5/examples: Updating ssd1306.py example with a comment describing the procedure on how to use the I2C variant of the driver.
nrf5/hal: Line wrapping params in hal_spi.c to make it easier to read.
nrf5/hal: Updating hal_twi.c tx implementation to a working state. STARTTX only issued once, before looping bytes.
nrf5/examples: Updating ssd1306.py driver to work with i2c master write implementation.
nrf5/hal: Updating hal_twi.c with tx function. Gets multiple startup bytes for each clocked byte.
nrf5/hal: Updating hal_twi.c with tx function which partly works. Bytes are clocked out a bit out of order.
nrf5/hal: Started implementation of hal_twi.c (non-DMA). Init function started.
nrf5: Removing hal_twie.c from being compiled in.
nrf5: Renaming configuration define in board configs using i2c from MICROPY_PY_MACHINE_HW_I2C to MICROPY_PY_MACHINE_I2C as the config is overlapping with the latter.
nrf5: Making i2c configurable from board configuration in case board has to sacrifice the i2c machine module.
nrf5/boards: Activating all display drivers in pca10056 board.
nrf5/boards: Updating s110 SD linker script for micro:bit.
nrf5/i2c: Making use of hal twi tx function in writeto function.
nrf5/hal: Updating twi driver with template functions.
nrf5/hal: Updating TWI DMA implementation. Suspend not working on tx. Rx not implemented yet.
nrf5/hal: Updating twi master tx with stop parameter.
nrf5/hal: Adding i2c master functions for tx and rx in hal header.
nrf5/hal: Adding new macros functions to mphalport.h which are used by extmod i2c machine module.
nrf5/i2c: Adopting use of extmod/machine_i2c module as base for port's machine i2c module.
nrf5/i2c: Backing up before trying out extmod i2c integration.
nrf5: Adding i2c class to machine module globals table.
nrf5: Updating main.c to initialize the i2c machine module if selected.
nrf5/i2c: Updating i2c machine module with new constructor parameters to set scl and sda pins. Also updating print function to debug pin number and port number for the gpio set.
nrf5/i2c: Updating i2c module to the new hal api, as master is initialized with its own init function.
nrf5/hal: Adding members to TWI config struct, device address and scl/sda pin. Renaming and adding functions such that TWI slave and master have separate init functions. Started implementation of master init function for nrf52 using DMA (hal_twie.c).
nrf5/i2c: Updating module to use new struct layout from hal_twi.h
nrf5/hal: Updating TWI with frequency enums.
nrf5/examples: Updating game file to use ssd1305 display driver.
nrf5/drivers: Updating examples in comment in oled ssd1305 object to use the draw module.
nrf5/hal: Fixing nrf51 SPI pin configuration to use pin member of struct.
nrf5/boards: Updating boards to comply to new style of configuring pins for uart and spi.
nrf5/boards: Updating board configuration for pca10056 (nrf52840) with new pin configuration scheme for SPI and UART.
nrf5/hal: Updating hal QSPI header with define guard to filter out usage of undefined structures and names when compiling against non-52840 targets.
nrf5/drivers: Updating display objects to use new SPI pin configuration in print function.
nrf5/hal: Updating SPI DMA variant with more frequencies, and allowing rx and tx buffers to be NULL.
nrf5/uart: Updating uart module to use new config hal config structure members for pins. Changing board config provided pins to use const pointers from generated pins instead of pin name.
nrf5/hal: Updating uart hal to use pointers to Pin objects instead of uint pin and port number.
nrf5/hal: Updating uart hal to use pointers to Pin objects instead of uint pin and port number.
nrf5: Updating modmachine to add SPI in globals dict when MICROPY_PY_MACHINE_HW_SPI define is set. This diverges from the regular MICROPY_PY_MACHINE_SPI config. Fixes missing SPI in the machine module after renaming port SPI enable define.
nrf5: Updating main.c to enable SPI if MICROPY_PY_MACHINE_HW_SPI is set. This diverges from the regular MICROPY_PY_MACHINE_SPI config. Fixing missing init of SPI after renaming port SPI enable define.
nrf5/spi: Adding multiple instances of machine SPI depending on which chip is targeted (nrf51/nrf52832/nrf52840). Updating board config requirement to give the variable name of a const pointer to Pin instead of a Pin name. Adding support for giving keyword-set mosi/miso/clk pins through the constructor.
nrf5/hal: Updating SPI hal with full list of SPI interfaces as lookup tables for all devices. Updating init struct to pass Pin instance pointers instead of uint pin number and ports.
nrf5/drivers: Activate ssd1289 object in the display module.
nrf5/boards: Adding ssd1289 lcd module in pca10040 (nrf52832) board.
nrf5: Adding ssd1289 driver and python module into build.
nrf5/drivers: Adding ssd1289 lcd tft driver and python module.
nrf5/hal: Fixing compile issues in quad SPI driver.
nrf5/hal: Updating Quad SPI hal driver.
nrf5/hal: Aligning assignment in hal_adc.c
nrf5/hal: Adding more types to quad SPI header.
nrf5: Syncing code after upmerge with master.
nrf5/hal: Updating clock frequency enums and lookup table for quad spi.
nrf5/hal: Adding QSPI base and IRQ num in c-file.
nrf5/hal: Adding hal template files for 32mhz Quad SPI peripheral.
nrf5/drivers: Optimizing update_line in ili9341 driver a bit.
nrf5/drivers: Adding space in macro.
nrf5/drivers: Adding rgb16.h with macro to convert 5-6-5 rgb values into a 16-bit value.
nrf5: Adding configuration defines for SSD1289 lcd driver.
nrf5: Removing old framebuffer implementation.
nrf5: Remove old framebuffer implementation from being included into the build.
nrf5/drivers: Enable framebuffer and graphics module to be compiled in by default if display is selected into the compilation.
nrf5/drivers: Updating epaper driver sld00200p to use new framebuffer.
nrf5/drivers: Removing debug printf's from epaper display python module.
nrf5/drivers: Updating python example in comment for ls0xxb7dxx display module.
nrf5/boards: Enable LS0XXB7DXXX display module in pca10056 board config.
nrf5/drivers: Adding ls0xxb7dxx to display module.
nrf5: Adding ssd1305 and ls0xxb7dxxx (sharp memory display) drivers to be included in build.
nrf5/drivers: Updating sharp memory display driver and python module to a working state.
nrf5/spi: Adding possibility to configure SPI firstbit mode to LSB or MSB. Default is MSB. Updating python module and hal driver.
nrf5/drivers: Tuning memory lcd driver a bit. Fixing small mp_printf usage bug.
nrf5/drivers: Adding sharp memory display driver. For now hardcoded to 2.7 inch variant.
nrf5: Adding configuration define for sharp memory display series in mpconfigport.h preparing for driver to be included.
nrf5/boards: Enable ssd1305 oled display to be default for pca10028 for now.
nrf5/drivers: Adding ssd1305 oled driver. This is very similar to ssd1306, so a merge will happen soon.
nrf5/drivers: Adding ssd1305 oled driver. This is very similar to ssd1306, so a merge will happen soon.
nrf5/drivers: Updating ili9341 display object to use new framebuffer.
nrf5/drivers: Updating ili9341 driver to use new framebuffer, and removing the compressed param from the line update function.
nrf5: Adding micropython mem_info() to be included in mpconfigport.h.
nrf5/drivers: Adding example in comment on how to use the ili9341 driver with nrf51/pca10028 board.
nrf5/examples: Adding an extra global variable to the game which breaks the game execution.
nrf5/examples: Adding 2048 game using OLED SSD1306 128x64 display and analog joystick.
nrf52/boards: Increasing the stack and heap in pca10056 (nrf52840) target from 2k/32k to 40k/128k to debug some buffer problems when running large frozen python programs.
nrf51/boards: Increasing heap and stack size in the pca10028 board.
nrf51/boards: Enable display driver and oled ssd1306 (also bringing in framebuffer and graphics module) into the pca10028 target.
nrf5: Enable display/framebuffer.c and graphic/draw.c into the build.
nrf5/drivers: Adding defines to exclude implementation of draw.c module if not enabled.
nrf5: Adding configuration defines for the graphics module (draw) and enabling this by default if using oled ssd1306 display which has a compatible python object definition.
nrf5/drivers: Adding draw module with circle, rectangle and text functions. Can be used by any display object which implements display callback functions.
nrf5/drivers: Moving oled ssd1306 driver over to new framebuffer layout. Moving some of the draw algorithms into the object in order to optimize the speed on writing data from the framebuffer.
nrf5/hal: Removing stdio.h include in adce.c which was used for debugging.
nrf5/boards: Adding ADC pins in pins.csv file for pca10056 (nrf52840).
nrf52/hal: Adding adce (saadc) implementation for nrf52 to sample values on a channel.
nrf5/adc: Adding all 8 instances to adc python module. Valid for both nrf51 and nrf52.
nrf5/drivers: Adding new structures to moddisplay. Adding a display_t structure to cast all other displays into, to retrieve function pointer table of a display object type. Also adding the function table structure which needs to be filled by any display object.
nrf5/drivers: Adding a new framebuffer implementation to replace the mono_fb.
nrf5/boards: Updating pca10028 (nrf51) board config. Enable SPI machine module. Enable flow control on UART. Correcting SPI CLK, MISO and MOSI pin assignments.
nrf5/adc: Updating adc module and hal with a new interface. No need for keeping peripheral base address in structure when there is only one peripheral (nrf51).
nrf5/rtc: Correcting RTC1 base error in rtc template.
nrf5: Adding adc module to machine module.
nrf5/hal: Updating hal_adc* with more api functions.
nrf5/adc: Adding updated adc module.
nrf5/boards: Enabling ADCE (SAADC) variant of adc hal to match hardware on nrf52 series.
nrf5/boards: Adding ADC config to pca10028 pins.csv
nrf5/boards: Tuning linker script for nrf51822_ac to get some more heap.
nrf5: Updating nrf51_af.csv to reflect pins having ADC on the chip.
nrf5/boards: Updating make-pins.py to generate ADC pin settings from board pins.csv.
nrf5/hal: Updating hal_adc header to use correct Type for ADC on nrf52.
nrf5/adc: Updating module to compile.
nrf5/boards: Enable ADC machine module for pca10028, pca10040 and pca10056.
nrf5: Add ADC machine module into build.
nrf5: Adding new config for ADC module in mpconfigport.h.
nrf5/adc: Adding ADC machine module base files. Implementation missing.
nrf5: Adding hal_adc* into build.
nrf5/boards: Enable ADC/SAADC hal for pca10028 (nrf51), pca10040 (nrf52832) and pca10056 (nrf52840) boards.
nrf5/hal: Removing chip variant guard for hal_adc*, and let this be up to the hal conf file to not mess up at the moment.
nrf5: Add i2c.c, i2c machine module, and hal_twi into build.
nrf5/boards: Enable hardware I2C machine module for pca10028 (nrf51), pca10040 (nrf52832) and pca10056 (nrf52840) boards.
nrf5/boards: Enable TWI hal for pca10028 (nrf51), pca10040 (nrf52832) and pca10056 (nrf52840) boards.
nrf5/i2c: Adding files for hardware i2c machine module and adding config param in mpconfigport to disable by default.
nrf5/hal: Adding template files for TWI (i2c) hal.
nrf5/hal: Adding template files for ADC hal.
nrf5/drivers: Correcting tabbing in oled ssd1306 c-module.
nrf5/boards: Enable SSD1306 spi driver for pca10040 (nrf52832) and pca10056 (nrf52840) boards.
nrf5/drivers: Adding SSD1306 SPI display driver. Not complete, but can do fill screen operation atm.
nrf5/drivers: Adding epaper display example script in comment for pca10056 / nrf52840 in the display module.
nrf5/boards: Enable PWM module and epaper display module in pca10056 board config.
nrf5/drivers: Adding some more delay on bootup to ensure display recovers after reset.
nrf5/examples: Adding copy of ssd1306.py driver hardcoded with SPI and Pin assignments.
nrf5/drivers: Updating ili9341 driver to set CS high after cmd or data write.
nrf5/drivers: Extending print function for ili9341 object to also print out gpio port of the SPI pins.
nrf5/boards: Giving a bit more heap for nrf52840 linker script.
nrf5/drivers: Bugfix of the sld00200p driver. Stopping the pwm instead of restarting it. Shuffle placement of static function.
nrf5/drivers: Correcting object print function to also include port number of the SPI pins. Correcting usage script example in comment.
nrf5/drivers: Adding an initial script as comment for ili9341 on nrf52840/pca10056 in the driver module comment.
nrf5/examples: Removing tabs from epaper python script usage comment, so that it is easier to copy paste.
nrf5/hal: Refining if-defs to set up GPIO base pointers in mphalport.h
nrf5/devices: Removing define which clutters ported modules from nrf.h.
nrf5/boards: Enabling spi in pca10056 hal config.
nrf5/boards: Enabling ili9341 display drivers and to be compiled in on pca10056 target board. Updating SPI configuration with gpio port.
nrf5/boards: Enabling display drivers/spi/pwm to be compiled in on pca10040 target board. Updating SPI configuration with gpio port.
nrf5/hal: Correcting SPI psel port position define name to the one defined in nrf52840_bitfields.h
nrf5/led: Hardcoding GPIO port 0 for Led module for now.
nrf5/hal: Changing import of nrf52 includes in hal_uarte.c to not be explicit. Now only nrf.h is included.
nrf5: Updating pin, spi and uart to use port configuration for gpio pins. Update pin generation script, macros for PIN generation. Updating macros for setting pin values adding new port parameter to select the correct GPIO peripheral port.
nrf5/boards: Disable SPI hal from pca10001 board.
nrf5/boards: Disable SPI/Timer/RTC hal from microbit board.
nrf5: Exclude import of pwm.h in modmachine.c if MICROPY_PY_MACHINE_PWM is not set, as nrf51 does not have this module yet.
nrf5: Exclude import of pwm.h in main.c if MICROPY_PY_MACHINE_PWM is not set, as nrf51 does not have this module yet.
nrf5/drivers: Block nrf51 from compiling epaper_sld00200p for the moment. There is no soft-pwm present yet, and including pwm would just make compilation fail now.
nrf5/hal: Making nrf51/2_hal.h go through nrf.h to find bitfields and other mcu headers instead of explicit include.
nrf5/boards: Adding more pins to nrf52840 / pca10056 target board.
nrf5/pin: Adding more pins to nrf52_af.csv file for nrf52840. Port '1' will be prefixed 'B'.
nrf5/pin: Adding PORT_B to Pin port enum to reflect gpio port 1 on nrf52840.
nrf5/boards: Updating all board configs with gpio port configuration for uart/spi pins. Leds still not defined by gpio port.
nrf5/devices: Updating header files for nrf51 and nrf52. Adding headers for nrf52840.
nrf5: Updating to use new nrfjprog in makefile. Needed for nrf52840 targets. Changed from pinreset to debug reset.
nrf5/boards: Updating makefiles to use system.c files based on sub-variant of mcu.
nrf5/devices: Renaming system.c files for nrf51 and nrf52 to be more explicit on which version of chip they are referring to.
nrf5/drivers: Backing up working epaper display (sld00200p shield) driver before refactoring.
nrf5/drivers: Fixing parenthesis in ILI9341 __str__ print function.
nrf5/pwm: Moving out object types to header file so that they can be reused by other modules.
nrf5/drivers: Updating a working version of ili9341 module and driver. About 10 times faster than python implementation to update a full screen.
nrf5: Started to split up lcd_mono_fb such that it can be used as a c-library and python module with the same implementation.
nrf5/hal: Adding include of stdbool.h in hal_spi.h as it is used by the header.
nrf5/drivers: Adding preliminary file for ili9341 lcd driver.
nrf5/hal: Adding support for NULL pointer to be set if no rx buffer is of interest in SPI rx_tx function.
nrf5: Adding ili9341 class and driver files in Makefile to be included in build.
nrf5/drivers: Adding template files for upcoming ili9341 driver.
nrf5/drivers: Adding lcd ili9341 object implementation to make a new instance. print implemented for debugging pins assigned to the display driver. No interaction yet with the hal driver.
nrf5/drivers: Adding ILI9341 class to the display global dict.
nrf5/boards: Changing tft lcd display name from SLD10261P to ILI9341 in pca10040 board configuration.
nrf5: Moving out mp_obj_framebuf_t to the header file to get access to it from other modules. Exposing helper function to make new framebuffer object from c-code.
nrf5: Trimming down display configurations in mpconfigport.h
nrf5/spi: Moving *_spi_obj_t out of implementation file to header. Setting hal init structure in the object structure instead of making a temp struct to configure hal. This would enable lookup of the spi settings later.
nrf5: Removing epaper, lcd and oled modules from Makefile source list as the display modules have been moved to the display root folder.
nrf5/drivers: Removing one level of module hierarchy in display drivers. Removed epaper, lcd and oled modules, making import of classes happen directly from display module.
nrf5/drivers: Creating python object implementation (locals) to be used for epaper sld00200p.
nrf5: Moving color defines in lcd_mono_fb from .c to .h so that it can be reused by other modules.
nrf5: Enable MICROPY_FINALISER and REPL_AUTO_INDENT.
nrf5/drivers: Adding requirement for nrf52 target on the epaper sld00200p for now. There is no ported PWM module for nrf51 target yet. Hence, soft PWM for nrf51 needs to be added.
nrf5: Adding suffix to _obj on epaper_sld00200p module.
nrf5: Correcting define name for epaper sld00200p, missing 0.
nrf5/drivers: Enable EPAPER_SLD00200P in epaper module globals table.
nrf5/drivers: Adding missing file for epaper module / driver.
nrf5/modules: Moving python scripts to examples folder to free up some flash space on constrained targets as modules folder is used as frozen files folder.
nrf5/boards: Enable display module to be built in. Also adding one epaper display and one tft lcd to test display module when porting the corresponding drivers to micropython.
nrf5/drivers: Removing external declaration of display module in header.
nrf5/drivers: Renaming display module to mp_module prefix as it is going to be inbuilt. ifdef'ing all submodules based on type of display configured through mpconfigport.h
nrf5/drivers: Adding ifdef surrounding the implementation of the module. Configurable with mpconfigport.h.
nrf5: Adding display module to port builtins.
nrf5/drivers: Adding driver files to makefile. Implicitly adding display module.
nrf5/drivers: Adding template for c-implementation of lcd, epaper and oled drivers as a display module.
nrf5/modules: Updating to correct name of display in epaper driver.
nrf5/modules: Adding python epaper display driver. Currently colors have been reversed.
nrf5/hal: Fixing bug in mp_hal_pin_read in mphalport.h which tried to read an OUT register. Corrected to read the IN register.
nrf5: Adding sleep_us to modutime.c and exposing mp_hal_delay_us in hal/hal_time.h
nrf5/lcd: Updating framebuffer with double buffer for epaper displays. Moving statics into instance struct. Adding new function to refresh using old buffer, such that epaper can get a cleaner image after update.
nrf5/boards: Adding initial microbit build files and board configurations.
nrf5: Makefile option to set FLASHER when doing flash target. If defined in board .mk file, this will be used, else nrfjprog will be used by default (segger). This opens up for using the pyocd flash tool while still running 'make flash'.
nrf5/boards: Updating pca10028 board config to not define RTS/CTS pins when HWFC is set to 0.
nrf5/uart: Making compile time exclusion of RTS/CTS if not defined to use flow control by board configuration.
nrf5/spi: Removing automatic chip select (NSS) in hal_spi.c. Also removing configuration of this pin as it is confusing to pass it if not used. User of SPI has to set the NSS/CS itself.
nrf5/modules: Updating PWM test python script to cope with new api.
nrf5/hal: Fixing some issues in PWM stop function. Doing a proper stop and disable the peripheral.
nrf5/pwm: Implementing start and stop call to hal on init and deinit as hal_init no longer starts the PWM automatically.
nrf5/hal: Exposing two new PWM hal functions start() and stop().
nrf5/hal: Moving enablement of PWM task from init to a start function. Also activating code in stop function to stop the PWM.
nrf5/modules: Adding licence text on seeedstudio tft shield python modules.
nrf52/boards: Tuning linker script for nrf52832 when using iot softdevice. Need more heap for LCD framebuffer.
nrf5/lcd: Adding lcd_mono_fb.c to source list in the makefile. Adding define in implementation to de-select the file from being included. Adding module to PORT BUILTIN in mpconfigport.h
nrf52/sdk: Correcting path to iot softdevice if SDK is enabled.
nrf5: Adding help text for CTRL-D (soft reset) and CTRL-E (paste mode) in help.c
nrf5: Adding handling of CTRL+D to reset chip in main.c. Call to NVIC System Reset is issued.
nrf5/lcd: Correcting indentation (tabs with spaces) in framebuffer module source and header.
nrf5/lcd: Changing framebuffer to use petme128 8x8 font. This is vertical font. Code modified to flip and mirror the font when rendering a character. Adding copy of the font from stmhal.
nrf5/modules: Adding new driver for seeedstudio tft shield v2, using new framebuffer module which handles faster update on single lines, callback driven write on each line which is touched in the framebuffer.
nrf5/lcd: Adding header file for lcd_mono_fb.
nrf5/lcd: Updating brackets in framebuffer module.
nrf5/lcd: Renaming variable name from m_ to p_
nrf5/lcd: Cleaning up a bit in lcd framebuffer.
nrf5/lcd: Adding work in progress monochrome lcd framebuffer driver which only updates modified (dirty) display lines.
nrf5/modules: Updating pulse test to set output direction on the LED pin used in the test.
nrf5/modules: Updating seeedstudio tft lcd driver to render using already existing framebuffer implementation.
nrf5/boards: Bumping up heap to 32k on pca10040 to allow the application to allocate a 9600+ byte framebuffer when using LCD screen (240x320).
nrf5/modules: Adding a function to get access to the SD card flash drive on the seeedstudio tft shield.
nrf5/modules: Adding new python script to initialize and clear the display on Seeedstudio 2.8 TFT Touch Shield v2.
nrf5/modules: Updating documentation on sdcard.py copy to use new params in the example description
nrf5/modules: Updating mountsd, SD card test script with new params.
nrf5/pin: Merging input and output pin configuration to one common function. Adding implementation in Pin class to be able to configure mode and pull. Updating drivers which use gpio pin configuration to use new function parameters.
nrf5: Adding rtc.c which implements the machine rtc module to be included in build.
nrf5/boards: Enable MICROPY_PY_MACHINE_RTC in pca10028 (nrf51) and pca10040 (nrf52) targets.
nrf5/hal: Adding empty init function in hal_rtc.c
nrf5/hal: Adding structures and init function prototype to hal_rtc.h.
nrf5: Setting MICROPY_PY_MACHINE_RTC to disabled by default (during development) in mpconfigport.h. This can be overridden by board config.
nrf5/rtc: Adding skeleton for machine rtc module for nrf51/52.
nrf5: Adding timer.c which implements the machine timer module to be included in build.
nrf5: Setting MICROPY_PY_MACHINE_TIMER to disabled by default (during development) in mpconfigport.h. This can be overridden by board config.
nrf5/boards: Enable MICROPY_PY_MACHINE_TIMER in pca10028 (nrf51) and pca10040 (nrf52) targets.
nrf5: Adding initialization of timer module if enabled by MICROPY_PY_MACHINE_TIMER.
nrf5/timer: Adding initialization of id field for Timer_HandleTypeDef's. Adding simple print function. Adding make_new function. Enabling the functions in machine_timer_type.
nrf5/hal: Adding empty init function in hal_timer.c
nrf5/hal: Adding structures and init function prototype to hal_timer.h.
nrf5/timer: Adding skeleton for machine timer module for nrf51/52.
nrf5/boards: Adding RTC and TIMER hal to be linked in when implemented. Enable one board for nrf51 and one for nrf52 for ease of debugging when implementing the hal.
nrf5: Adding rtc and timer hal to Makefile.
nrf5/hal: Adding skeleton files for rtc and timer driver.
nrf5/modules: Updating pulse example to work with Pin object instead of hard coded pin number.
nrf5/pwm: Switching from hardcoded pin number to Pin object type as input to the new() function. Also changing the parameter from kw to arg.
nrf5/modules: updating test python file with correct PWM frequency type.
nrf5/modules: Adding a python test file with function to dim a specific led (17).
nrf5/pwm: Updating pwm module with freq function which re-initialises the PWM instance such that the new frequency will be applied.
nrf5/pwm: Initializing pwm instances in main.c if enabled by MICROPY_PY_MACHINE_PWM.
nrf5/pwm: Adding api to initialize pwm instances.
nrf5: Updating mpconfigport.h to set a default for PWM machine module to be enabled by default, if not disabled in a board config. Refactoring order in the file.
nrf52: Set names to be used on PWM0-2 in board config. For nrf52840, the PWM3 is excluded as the repo does not have the latest headers to reflect this yet. Bump up to be done soon.
nrf52: Enable PWM HAL for both pca10040 (nrf52832) and pca10056 (nrf52840).
nrf51: Disable MICROPY_PY_MACHINE_PWM for now in all nrf51 target boards as sw impl. is not yet included in the repo.
nrf5: Only enable hal_pwm.c if nrf52 target as nrf51 must have a sw implementation.
nrf5/pwm: Adding pwm to modmachine.c
nrf5/hal: Updating PWM header file with init function prototype. Also added PWM_HandleTypeDef structure that can be used in the pwm python module.
nrf5/pwm: Updating PWM dict table to have freq and duty function. Also added creation of default objects based on PWM name set in board config. Adding ifdef surrounding the import of hal_pwm.h as this module might be used by a software implementation of PWM later.
nrf5/pwm: Removing include of hal_pwm.h as pwm.c might not use a hal, but sw implementation.
nrf5: Updating makefile to compile in pwm.c and hal_pwm.c
nrf5/boards: Adding config flag for HAL_PWM in pca10040 and pca10056.
nrf5: Adding pwm work in progress machine PWM module.
nrf5/hal: Starting implementation of PWM hal to be used by PWM python module later.
nrf5: Adding initial board files for pca10056. The files are not complete (only 32 pins are added for now). UART REPL, leds, and Pins (up to 31) are functional.
nrf5: Updating comment in linker script for nrf52832 and nrf52840 to distinguish between the two nrf52 variants.
nrf5: Adding new linker script for nrf52840.
nrf5: Updating flash size comment in nrf52832 linker script.
lib/netutils: Adding some basic parsing and formatting of IPv6 address strings. Only working with full-length IPv6 strings. Short forms are not supported at the moment (for example FE80::1 needs to be expressed as FE80:0000:0000:0000:0000:0000:0000:0001).
nrf5: Updating port with new content: SPI, SD card (through sdcard.py), Pin, and machine module. Also adding some basic modules depending on the SDK and Bluetooth stack from Nordic Semiconductor. NUS is a module copied from the original port by tralamazza, plus a new basic module for 6LoWPAN over BLE which can be used by modnetwork and modusocket. A basic BLE module to enable the Bluetooth stack and start an Eddystone advertisement is kept, and still works without the SDK, even if it lives in the sdk folder (it's placed there as it needs a Bluetooth stack from an SDK).
Renaming softdevice folder to sdk.
Removing unused 'NRF_SOFTDEVICE' compile variable from all board .mk softdevice targets.
Fixing main Makefile CFLAGS concatenation error when setting softdevice param
Daniel Tralamazza <daniel@tralamazza.com>
ignore default build folders
move softdevice (SD) specific code from the main Makefile to their respective board/SD makefiles
Glenn Ruben Bakke <glennbakke@gmail.com>
Updating Makefile by removing unwanted LDFLAG setting cpu to cortex-m0 in all cases.
Updating modble.c method doc of address_print() to reflect the actual function name.
Base support for nrf51 and nrf52 without depending on the SDK. SoftDevice usage optional.
Daniel Tralamazza <daniel@tralamazza.com>
remove dup declaration mp_builtin_open_obj
init
Date of "init" commit: Wed Jun 22 22:34:11 2016 +0200
The UART.init() method is now included unconditionally and its wording
adjusted to better describe ports other than the cc3200.
UART.irq() is also included unconditionally, but this is currently only
available on the WiPy target.
By virtue of its name, the pyb module would only be available on a pyboard
and so does not need to have conditional "only" directives throughout its
documentation.
These conditionals were added mostly in
cfcf47c064 in the initial development of the
cc3200 port, which had the pyb module before it switched to the machine
module. And wipy only conditionals were removed from the pyb module
documentation in 4542643025, so there's no
need to retain any more conditionals.
The period of the timer can now be specified using the "period" and
"tick_hz" args. The period in seconds will be: period/tick_hz. tick_hz
defaults to 1000, so if period is specified on its own then it will be in
units of milliseconds.
machine.Timer now takes a new argument in its constructor (or init method):
tick_hz which specifies the units for the period argument. The period of
the timer in seconds is: period/tick_hz.
For backwards compatibility tick_hz defaults to 1000. If the user wants to
specify the period (numerator) in microseconds then tick_hz can be set to
1000000. The user can also specify a period of an arbitrary number of
cycles of an arbitrary frequency using these two arguments.
An additional freq argument has been added to allow frequencies to be
specified directly in Hertz. This supports floating point values when
available.
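For example (a minimal sketch; the timer id and callback are placeholders and the exact keyword support is port-specific):
from machine import Timer
t = Timer(0)  # timer id is port-specific
# 500 ticks at the default tick_hz of 1000, ie a 500ms period
t.init(period=500, mode=Timer.PERIODIC, callback=lambda t: print('tick'))
# the same period expressed in microseconds
t.init(period=500000, tick_hz=1000000, mode=Timer.PERIODIC, callback=lambda t: print('tick'))
# or give the frequency directly in Hz
t.init(freq=2, mode=Timer.PERIODIC, callback=lambda t: print('tick'))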
Using direct register control as specified by ESP-IDF in
components/esp32/test/test_tsens.c. Temperature doesn't represent any
particular unit, isn't calibrated and will vary from device to device.
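A hedged usage sketch, assuming the reading is exposed as esp32.raw_temperature():
import esp32
print(esp32.raw_temperature())  # raw, uncalibrated value with no particular unit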
There's no need to call mp_obj_new_int() which will just fail the check for
small int and call mp_obj_new_int_from_ll() anyway.
Thanks to @Jongy for prompting this change.
Prior to this patch, get_fattime() was calling a HAL RTC function with the
HW instance pointer as null because rtc_init_start() was never called.
Also marked it as a weak function, to allow a board to override it.
In non-debug mode MP_OBJ_STOP_ITERATION is zero and comparing something to
zero can be done more efficiently in assembler than comparing to a non-zero
value.
With the recent change b488a4a848, a
generating function now has the same layout in memory as a normal bytecode
function, and so can reuse the latter's attribute accessor code to
implement __name__.
mpy-cross doesn't depend on any code in the extmod directory so completely
exclude it from the build (extmod may still be scanned for qstrs but that
is controlled by py/py.mk). This speeds up the build a little, and
improves abstraction of this component.
Also, make -I$(BUILD) take precedence over -I$(TOP) in case there are stray
files in the root directory that would be picked up.
With this and previous patches the stm32 port can now be compiled using
object representation D (nan boxing). Note that native code and frozen mpy
files with float constants are currently not supported with this object
representation.
Because this function is simple it saves code size to have it inlined.
Being an auxiliary helper function (and only used in the py/ core) the
argument should always be an mp_obj_module_t*, so there's no need for the
assert (and having it would require including assert.h in obj.h).
It's a very simple function and saves code, and improves efficiency, by
being inline. Note that this is an auxiliary helper function and so
doesn't need mp_check_self -- that's used for functions that can be
accessed directly from Python code (eg from a method table).
mp_obj_module_get_globals() returns a mp_obj_dict_t*, and type->locals_dict
is a mp_obj_dict_t*, so access the map entry of the dict directly instead
of needing to cast this mp_obj_dict_t* up to an object and then calling the
mp_obj_dict_get_map() helper function.
Printing debugging info by defining MICROPY_DEBUG_VERBOSE expects
a definition of the DEBUG_printf function which is readily available
in printf.c so include that file in the build. Before this patch
one would have to manually provide such a definition, which is tedious.
For the msvc port disable MICROPY_USE_INTERNAL_PRINTF though: the
linker provides no (easy) way to replace printf with the custom
version as defined in printf.c.
The definition of DEBUG_printf doesn't depend on
MICROPY_USE_INTERNAL_PRINTF so move it out of that preprocessor
block and compile it conditionally just depending on the
MICROPY_DEBUG_PRINTERS macro. This allows a port to use DEBUG_printf
while providing its own printf definition.
It seems that some cards do not tolerate releasing the card (by setting CS
high) after issuing CMD17 (and 18) and raising it again before reading
data. Somehow this causes the 0xfe data start marker not being read and
SDCard.readinto() is spinning forever (or until this byte is in the data).
This seems to fix weird behaviour of SDCard.readblocks() returning different
data, also solved hanging os.mount() for my case with a 16GB Infineon card.
This stackexchange answer gives more context:
https://electronics.stackexchange.com/questions/307214/sd-card-spi-interface-issue-read-operation-returns-0x3f-0xff-instead-of-0x7f-0#307268
Prior to this patch there was a large latency for executing scheduled
callbacks when Python code is sleeping: at the heart of the
implementation of sleep_ms() is a call to vTaskDelay(1), which always
sleeps for one 100Hz tick, before performing another call to
MICROPY_EVENT_POLL_HOOK.
This patch fixes this issue by using FreeRTOS Task Notifications to signal
the main thread that a new callback is pending.
This patch allows scripts to have more control over the software WDT. If
an instance of machine.WDT is created then the underlying OS is prevented
from feeding the software WDT, and it is up to the user script to feed it
instead via WDT.feed(). The timeout for this WDT is currently fixed and
will be between 1.6 and 3.2 seconds.
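For example (a minimal sketch; the WDT timeout itself is fixed by the port and not configurable here):
import time
from machine import WDT
wdt = WDT()  # from now on the OS no longer feeds the software WDT
while True:
    # do the real work here, then feed within the 1.6-3.2 second window
    wdt.feed()
    time.sleep(1)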
A flash erase/write takes a while and during that time tasks may be
scheduled via an IRQ. To prevent overflow of the task queue (and loss of
tasks) call ets_loop_iter() before and after slow flash operations.
Note: if a task is posted to a full queue while a flash operation is in
progress then this leads to a fault when trying to print out the error
message that the queue is full. This patch doesn't try to fix this
particular issue, it just prevents it from happening in the first place.
For generating functions there is no need to wrap the bytecode function in
a generator wrapper instance. Instead the type of the bytecode function
can be changed to mp_type_gen_wrap. This reduces code size and saves a
block of GC heap RAM for each generator.
This feature is controlled at compile time by MICROPY_PY_URE_SUB, disabled
by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by
MICROPY_PY_URE_MATCH_SPAN_START_END, disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
This feature is controlled at compile time by MICROPY_PY_URE_MATCH_GROUPS,
disabled by default.
Thanks to @dmazzella for the original patch for this feature; see #3770.
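A short usage sketch covering the three optional features (each line needs the corresponding MICROPY_PY_URE_* option enabled):
import ure
m = ure.match(r'(\d+)-(\d+)', '123-456')
print(m.groups())   # ('123', '456'), needs MICROPY_PY_URE_MATCH_GROUPS
print(m.span(1))    # (0, 3), needs MICROPY_PY_URE_MATCH_SPAN_START_END
print(ure.sub(r'\d+', 'N', '123-456'))  # 'N-N', needs MICROPY_PY_URE_SUB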
This function may be called from a UART IRQ, which may interrupt the system
when it is erasing/reading/writing flash. In such a case all code
executing from the IRQ must be in iRAM (because the SPI flash is busy), so
put mp_keyboard_interrupt in iRAM so ctrl-C can be caught during flash
access.
This patch also takes get_fattime out of iRAM and puts it in iROM to make
space for mp_keyboard_interrupt. There's no real need to have get_fattime
in iRAM because it calls other functions in iROM.
Fixes issue #3897.
Before this patch the context manager's __aexit__() method would not be
executed if a return/break/continue statement was used to exit an async
with block. async with now has the same semantics as normal with.
The fix here applies purely to the compiler, and does not modify the
runtime at all. It might (eventually) be better to define new bytecode(s)
to handle async with (and maybe other async constructs) in a cleaner, more
efficient way.
One minor drawback with addressing this issue purely in the compiler is
that it wasn't possible to get 100% CPython semantics. The thing that is
different here to CPython is that the __aexit__ method is not looked up in
the context manager until it is needed, which is after the body of the
async with statement has executed. So if a context manager doesn't have
__aexit__ then CPython raises an exception before the async with is
executed, whereas uPy will raise it after it is executed. Note that
__aenter__ is looked up at the beginning in uPy because it needs to be
called straightaway, so if the context manager isn't a context manager then
it'll still raise an exception at the same location as CPython. The only
difference is if the context manager has the __aenter__ method but not the
__aexit__ method, then in that case uPy has different behaviour. But this
is a very minor, and acceptable, difference.
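A small illustration of the corrected semantics; the coroutine is driven by hand here so the sketch does not depend on uasyncio:
class ACM:
    async def __aenter__(self):
        print('enter')
        return self
    async def __aexit__(self, exc_type, exc, tb):
        print('exit')  # with this fix, also runs when the body returns early
        return False
async def main():
    async with ACM():
        return 1  # previously __aexit__ was skipped on this early return
try:
    main().send(None)  # drive the coroutine to completion by hand
except StopIteration:
    pass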
This will allow implementations other than axTLS.
This commit includes additions of checks and clarifications of exceptions
related to user input.
To make the interface cleaner, I've disallowed switching from encrypt to
decrypt in the same object, as this is not always possible with other
crypto libraries (not all libraries have AES_convert_key like axTLS).
Allow including crypto consts based on compilation settings. Disabled by
default to reduce code size; if one wants extra code readability, can
enable them.
Target RAM size is no longer set using Kconfig options, but instead using
DTS (device tree config). Fortunately, the default is now set to a high
value, so we don't need to use DTS fixup.
CONFIG_NET_NBUF_RX_COUNT no longer exists in Zephyr, for a while. That
means we build with the default RX buf count for a while too, and it works,
so just remove it (instead of switching to what it was renamed to,
CONFIG_NET_PKT_RX_COUNT).
These can be optionally specified, but all ports are expected to be able
to accept them, or at the very least ignore them, though handling of the
"type" param (SOCK_STREAM vs SOCK_DGRAM) is recommended.
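For example, both of the following are expected to be accepted by every port (the second may simply ignore its arguments):
import usocket
s1 = usocket.socket()
s2 = usocket.socket(usocket.AF_INET, usocket.SOCK_STREAM)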
The API follows guidelines of https://www.python.org/dev/peps/pep-0272/,
but is optimized for code size, with the idea that full PEP 0272
compatibility can be added with a simple Python wrapper mode.
The naming of the module follows the (u)hashlib pattern.
At the bare minimum, this module is expected to provide:
* AES128, ECB (i.e. "null") mode, encrypt only
Implementation in this commit is based on axTLS routines, and implements
the following:
* AES 128 and 256
* ECB and CBC modes
* encrypt and decrypt
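A hedged usage sketch of the resulting API; the numeric mode values (1 for ECB, 2 for CBC) are an assumption here since the named constants are disabled by default:
from ucryptolib import aes
key = b'0123456789abcdef'  # 16 bytes, so AES-128
enc = aes(key, 1)          # mode 1 assumed to mean ECB
ct = enc.encrypt(b'16-byte message!')
dec = aes(key, 1)          # use a separate object for decryption
print(dec.decrypt(ct))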
Prior to this patch, if both USB FS and HS were enabled via the
configuration file then code was included to handle both of their IRQs.
But mboot only supports listening on a single USB peripheral, so this patch
excludes the code for the USB that is not used.
Only one of pcd_fs_handle/pcd_hs_handle is ever initialised, so if both of
these USB peripherals are enabled then one of these if-statements will
access invalid memory pointed to by an uninitialised Instance. This patch
fixes this bug by explicitly referencing the peripheral struct.
This patch adds support to mboot for programming external SPI flash. It
allows SPI flash to be programmed via a USB DFU utility in the same way
that internal MCU flash is programmed.
Prior to this patch the QSPI driver assumed that the length of all data
reads and writes was a multiple of 4. This patch allows any length. Reads
are optimised for speed by using 32-bit transfers when possible, but writes
always use a byte transfer because they only use a single data IO line and
are relatively slow.
The DMA peripheral is limited to transferring 65535 elements at a time so
in order to send more than that the SPI driver must split the transfers up.
The user must be aware of this limit if they are relying on precise timing
of the entire SPI transfer, because there might be a small delay between
the split transfers.
Fixes issue #3851, and thanks to @kwagyeman for the original fix.
This behaviour of a NULL write C method on a stream that uses the write
adaptor objects is no longer supported. It was only ever used by the
coverage build for testing the fail path of mp_get_stream_raise().
With this patch objects are only checked that they have the stream protocol
at the start of their use as a stream, and afterwards the efficient
mp_get_stream() helper is used to extract the stream protocol C methods.
The existing mp_get_stream_raise() helper does explicit checks that the
input object is a real pointer object, has a non-NULL stream protocol, and
has the desired stream C method (read/write/ioctl). In most cases it is
not necessary to do these checks because it is guaranteed that the input
object has the stream protocol and desired C methods. For example, native
objects that use the stream wrappers (eg mp_stream_readinto_obj) in their
locals dict always have the stream protocol (or else they shouldn't have
these wrappers in their locals dict).
This patch introduces an efficient mp_get_stream() which doesn't do any
checks and just extracts the stream protocol struct. This should be used
in all cases where the argument object is known to be a stream. The
existing mp_get_stream_raise() should be used primarily to verify that an
object does have the correct stream protocol methods.
All uses of mp_get_stream_raise() in py/stream.c have been converted to use
mp_get_stream() because the argument is guaranteed to be a proper stream
object.
This patch improves efficiency of stream operations and reduces code size.
If the user button is held down indefinitely (eg unintentionally, or because
the GPIO signal of the user button is connected to some external device)
then it makes sense to end the reset mode cycle with the default mode of
1, which executes code as normal.
It's possible (at least on F4 MCU's) to have RXNE and STOPF set at the same
time during a call to the slave IRQ handler. In such cases RXNE should be
handled before STOPF so that all bytes are processed before
i2c_slave_process_rx_end() is called.
Due to buffering of outgoing bytes on the I2C bus, detection of a NACK
using the ISR_NACKF flag needs to account for the case where ISR_NACKF
corresponds to the previous-to-previous byte.
This patch renames the existing SPI flash API functions to reflect the
fact that they go through the cache:
mp_spiflash_flush -> mp_spiflash_cache_flush
mp_spiflash_read -> mp_spiflash_cached_read
mp_spiflash_write -> mp_spiflash_cached_write
This patch removes the global cache variables from the SPI flash driver and
now requires the user to provide the cache memory themselves, via the SPI
flash configuration struct. This allows to either have a shared cache for
multiple SPI flash devices (by sharing a mp_spiflash_cache_t struct), or
have a single cache per device (or a mix of these options).
To configure the cache use:
mp_spiflash_cache_t spi_bdev_cache;
const mp_spiflash_config_t spiflash_config = {
    // any bus options
    .cache = &spi_bdev_cache,
};
This patch changes dupterm to call the native C stream methods on the
connected stream objects, instead of calling the Python readinto/write
methods. This is much more efficient for native stream objects like UART
and webrepl and doesn't require allocating a special dupterm array.
This change is a minor breaking change from the user's perspective because
dupterm no longer accepts pure user stream objects to duplicate on. But
with the recent addition of uio.IOBase it is possible to still create such
classes just by inheriting from uio.IOBase, for example:
import uio, uos
class MyStream(uio.IOBase):
    def write(self, buf):
        ...  # existing write implementation
    def readinto(self, buf):
        ...  # existing readinto implementation
uos.dupterm(MyStream())
Via the config value MICROPY_PY_UHASHLIB_SHA256. Default to enabled to
keep backwards compatibility.
Also add default value for the sha1 class, to at least document its
existence.
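Usage keeps the familiar hashlib pattern, for example:
import uhashlib
h = uhashlib.sha256()
h.update(b'hello ')
h.update(b'world')
print(h.digest())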
For consistency with other modules, and to help avoid clashes with the
actual underlying functions that do the hashing (eg
crypto-algorithms/sha256.c:sha256_update).
A user class derived from IOBase and implementing readinto/write/ioctl can
now be used anywhere a native stream object is accepted.
The mapping from C to Python is:
stream_p->read --> readinto(buf)
stream_p->write --> write(buf)
stream_p->ioctl --> ioctl(request, arg)
Among other things it allows the user to:
- create an object which can be passed as the file argument to print:
print(..., file=myobj), and then print will pass all the data to the
object via the objects write method (same as CPython)
- pass a user object to uio.BufferedWriter to buffer the writes (same as
CPython)
- use select.select on a user object
- register user objects with select.poll, in particular so user objects can
be used with uasyncio
- create user files that can be returned from user filesystems, and import
can import scripts from these user files
For example:
class MyOut(io.IOBase):
    def write(self, buf):
        print('write', repr(buf))
        return len(buf)
print('hello', file=MyOut())
The feature is enabled via MICROPY_PY_IO_IOBASE which is disabled by
default.
This patch adds the gc_sweep_all() function which does a garbage collection
without tracing any root pointers, so frees all the memory, and most
importantly runs any remaining finalisers.
This helps primarily for soft reset: it will close any open files, any open
sockets, and help to get the system back to a clean state upon soft reset.
The ST DFU bootloader supports a transfer size up to 2048 bytes, so send
that much data on each download (to device) packet. This almost halves
total download time.
The DFU USB config descriptor returns 0x0800=2048 for the supported
transfer size, and this applies to both TX (IN) and RX (OUT). So increase
the rx_buf to support this size without having a buffer overflow on
received data.
With this patch mboot in USB DFU mode now works with dfu-util.
Currently <WLAN>.isconnected() always returns True if a static IP is set,
regardless of the state of the connection.
This patch introduces a new flag 'wifi_sta_connected' which is set in
event_handler() when GOT_IP event is received and reset when DISCONNECTED
event is received (unless re-connect is successful). isconnected() now
simply returns the status of this flag (for STA_IF).
The pre-existing flag misleadingly named 'wifi_sta_connected' is also
renamed to 'wifi_sta_connect_requested'.
Fixes issue #3837
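A sketch of the resulting behaviour on the STA interface (addresses and credentials are placeholders):
import network
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.ifconfig(('192.168.1.50', '255.255.255.0', '192.168.1.1', '8.8.8.8'))  # static IP
print(sta.isconnected())  # now False until the GOT_IP event, despite the static IP
sta.connect('my-ssid', 'my-password')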
For i2c.py: the accelerometer now uses the new I2C driver so we need to
explicitly init the legacy i2c object to get the test working.
For pyb1.py: the legacy pyb.hid() call will crash if the USB_HID object is
not initialised.
MICROPY_PY_DELATTR_SETATTR can now be enabled without a performance hit for
classes that don't use this feature.
MICROPY_PY_BUILTINS_NOTIMPLEMENTED is a minor addition that improves
compatibility with CPython.
They are now efficient (in runtime performance) and provide a useful
feature that's hard to obtain without them enabled.
See issue #3644 and PR #3826 for background.
This patch is a code optimisation, trading text bytes for speed. On
pyboard it's an increase of 0.06% in code size for a gain (in pystone
performance) of roughly 6.5%.
The patch optimises load/store/delete of attributes in user defined classes
by not looking up special accessors (@property, __get__, __delete__,
__set__, __setattr__ and __getattr__) if they are guaranteed not to exist in
the class.
Currently, if you do my_obj.foo() then the runtime has to do a few checks
to see if foo is a property or has __get__, and if so delegate the call.
And for stores things like my_obj.foo = 1 has to first check if foo is a
property or has __set__ defined on it.
Doing all those checks each and every time the attribute is accessed has a
performance penalty. This patch eliminates all those checks for cases when
it's guaranteed that the checks will always fail, ie no attributes are
properties nor have any special accessor methods defined on them.
To make this guarantee it checks all attributes of a user-defined class
when it is first created. If any of the attributes of the user class are
properties or have special accessors, or any of the base classes of the
user class have them, then it sets a flag in the class to indicate that
special accessors must be checked for. Then in the load/store/delete code
it checks this flag to see if it can take the shortcut and optimise the
lookup.
It's an optimisation that's pretty widely applicable because it improves
lookup performance for all methods of user defined classes, and stores of
attributes, at least for those that don't have special accessors. And, it
allows to enable descriptors with minimal additional runtime overhead if
they are not used for a particular user class.
There is one restriction on dynamic class creation that has been introduced
by this patch: a user-defined class cannot go from zero special accessors
to one special accessor (or more) after that class has been subclassed. If
the script attempts this an AttributeError is raised (see addition to
tests/misc/non_compliant.py for an example of this case).
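A hedged illustration of that restriction, along the lines of the test added to tests/misc/non_compliant.py:
class A:
    pass
class B(A):
    pass
try:
    A.bar = property(lambda self: 1)  # adding a first special accessor after subclassing
except AttributeError:
    print('AttributeError')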
The cost in code space bytes for the optimisation in this patch is:
unix x64: +528
unix nanbox: +508
stm32: +192
cc3200: +200
esp8266: +332
esp32: +244
Performance tests that were done:
- on unix x86-64, pystone improved by about 5%
- on pyboard, pystone improved by about 6.5%, from 1683 up to 1794
- on pyboard, bm_chaos (from CPython benchmark suite) improved by about 5%
- on esp32, pystone improved by about 30% (but there are caching effects)
- on esp32, bm_chaos improved by about 11%
Coveralls requires a "recent" version of urllib3, whereas requests requires
a "not so recent" version, less than 1.23. So force urllib3 v1.22 to get
it all working.
Following other C-level protocols, this VFS protocol is added to help
abstract away implementation details of the underlying VFS in an efficient
way. As a starting point, the import_stat function is put into this
protocol so that the VFS sub-system does not need to know about every VFS
implementation in order to do an efficient stat for importing files.
In the future it might be worth adding other functions to this protocol.
Now that the coverage build has fully switched to the VFS sub-system these
functions were no longer available, so add them to the uos_vfs module.
Also, vfs_open is no longer needed, it's available as the built-in open.
This conditional import was only used to get the tests working on the unix
coverage build, which has now switched to use VFS by default so the uos
module alone has the required functionality.
The unix coverage build is now switched fully to the VFS implementation, ie
the uos module is the uos_vfs module. For example, one can now sandbox uPy
to their home directory via:
$ ./micropython_coverage
>>> import uos
>>> uos.umount('/') # unmount existing root VFS
>>> vfs = uos.VfsPosix('/home/user') # create new POSIX VFS
>>> uos.mount(vfs, '/') # mount new POSIX VFS at root
Some filesystem/OS features may no longer work with the coverage build due
to this change, and these need to be gradually fixed.
The standard unix port remains unchanged, it still uses the traditional uos
module which directly accesses the underlying host filesystem.
This VFS component allows to mount a host POSIX filesystem within the uPy
VFS sub-system. All traditional POSIX file access then goes through the
VFS, allowing to sandbox a uPy process to a certain sub-dir of the host
system, as well as mount other filesystem types alongside the host
filesystem.
This patch adds support for building the firmware with external SPI RAM
enabled. It is disabled by default because it adds overhead (due to
silicon workarounds) and reduces performance (because it's slower to have
bytecode and objects stored in external RAM).
To enable it, either use "make CONFIG_SPIRAM_SUPPORT=1", or add this line
to your custom makefile/GNUmakefile (before "include Makefile"):
CONFIG_SPIRAM_SUPPORT = 1
When this option is enabled the MicroPython heap is automatically allocated
in external SPI RAM.
Thanks to Angus Gratton for help with the compiler and linker settings.
For a long time now mp_obj_type_t has not referred explicitly to
mp_stream_p_t but rather to an abstract "const void *protocol". So there's
no longer any need to define mp_stream_p_t in obj.h and it can go with all
its associated definitions in stream.h. Pretty much all users of this type
will already include the stream header.
The Wiznet5k series of chips support a MACRAW mode which allows the host to
send and receive Ethernet frames directly. This can be hooked into the
lwIP stack to provide a full "socket" implementation using this Wiznet
Ethernet device. This patch adds support for this feature.
To enable the feature one must add the following to mpconfigboard.mk, or
mpconfigport.mk:
MICROPY_PY_WIZNET5K = 5500
and the following to mpconfigboard.h, or mpconfigport.h:
#define MICROPY_PY_LWIP (1)
After wiring up the module (X5=CS, X4=RST), usage on a pyboard is:
import time, network, pyb
nic = network.WIZNET5K(pyb.SPI(1), pyb.Pin.board.X5, pyb.Pin.board.X4)
nic.active(1)
while not nic.isconnected():
    time.sleep_ms(50)  # needed to poll the NIC
print(nic.ifconfig())
Then use the socket module as usual.
Compared to using the built-in TCP/IP stack on the Wiznet module, some
performance is lost in MACRAW mode: with a lot of memory allocated to lwIP
buffers, lwIP gives around 750,000 bytes/sec max TCP download, compared
with 1M/sec when using the TCP/IP stack on the Wiznet module.
It should be up to the NIC itself to decide if the network interface is
removed upon soft reset. Some NICs can keep the interface up over a soft
reset, which improves usability of the network.
If mbedtls_ctr_drbg_seed() is available in the mbedtls build then so should
be mbedtls_entropy_func(). Then it's up to the port to configure a valid
entropy source, eg via MBEDTLS_ENTROPY_HARDWARE_ALT.
Otherwise the "sock" member may have an undefined value if wrap_socket
fails with an exception and exits early, and then if the finaliser runs it
will try to close an invalid stream object.
Fixes issue #3828.
Pins with multiple alt-funcs for the same peripheral (eg USART_CTS_NSS)
need to be split into individual alt-funcs for make-pins.py to work
correctly.
This patch changes the following:
- Split `..._CTS_NSS` into `..._CTS/..._NSS`
- Split `..._RTS_DE` into `..._RTS/..._DE`
- Split `JTDO_SWO` into `JTDO/TRACESWO` for consistency
- Fixed `TRACECK` to `TRACECLK` for consistency
If no block devices are defined by a board then storage support will be
disabled. This means there is no filesystem provided by either the
internal flash or external SPI flash. But the VFS system can still be
enabled and filesystems provided on external devices like an SD card.
Mboot is a custom bootloader for STM32 MCUs. It can provide a USB DFU
interface on either the FS or HS peripherals, as well as a custom I2C
bootloader interface.
These files provide no additional information, all the version and license
information is captured in the relevant files in these subdirectories.
Thanks to @JoeSc for the original patch.
The code_state.old_globals variable is there to save the globals state so
should be used for this purpose, to avoid the need for additional local
variables on the C stack.
This patch allows to use lwIP as the implementation of the usocket module,
instead of the existing socket-multiplexer that delegates the entire TCP/IP
layer to the NIC itself.
This is disabled by default, and enabled by defining MICROPY_PY_LWIP to 1.
When enabled, the lwIP TCP/IP stack will be included in the build with
default settings for memory usage and performance (see
lwip_inc/lwipopts.h). It is then up to a particular NIC to register itself
with lwIP using the standard lwIP netif API.
Without this, if GC threshold is hit and there is not enough memory left to
satisfy the request, gc_collect() will run a second time and the search for
memory will happen again and will fail again.
Thanks to @adritium for pointing out this issue, see #3786.
Under ubsan, when evaluating hash(-0.) the following diagnostic occurs:
../../py/objfloat.c:102:15: runtime error: negation of
-9223372036854775808 cannot be represented in type 'mp_int_t' (aka
'long'); cast to an unsigned type to negate this value to itself
So do just that, to tell the compiler that we want to perform this
operation using modulo arithmetic rules.
Before this, ubsan would detect a problem when executing
hash(006699999999999999999999999999999999999999999999999999999999999999999999)
../../py/mpz.c:1539:20: runtime error: left shift of 1067371580458 by
32 places cannot be represented in type 'mp_int_t' (aka 'long')
When the overflow does occur it now happens as defined by the rules of
unsigned arithmetic.
When computing e.g. hash(0.4e3) with ubsan enabled, a diagnostic like the
following would occur:
../../py/objfloat.c:91:30: runtime error: shift exponent 44 is too
large for 32-bit type 'int'
By casting constant "1" to the right type the intended value is preserved.
Fuzz testing combined with the undefined behavior sanitizer found that
parsing unreasonable float literals like 1e+9999999999999 resulted in
undefined behavior due to overflow in signed integer arithmetic, and a
wrong result being returned.
There is no need to use the mp_int_t type which may be 64-bits wide, there
is enough bit-width in a normal int to parse reasonable exponents. Using
int helps to reduce code size for 64-bit ports, especially nan-boxing
builds. (Similarly for the "dig" variable which is now an unsigned int.)
Calling memset(NULL, value, 0) is not standards compliant so we must add an
explicit check that emit->label_offsets is indeed not NULL before calling
memset (this pointer will be NULL on the first pass of the parse tree and
it's more logical / safer to check this pointer rather than check that the
pass is not the first one).
Code sanitizers will warn if NULL is passed as the first value to memset,
and compilers may optimise the code based on the knowledge that any pointer
passed to memset is guaranteed not to be NULL.
This patch makes it so that UART(0) can by dynamically attached to and
detached from the REPL by using the uos.dupterm function. Since WebREPL
uses dupterm slot 0 the UART uses dupterm slot 1 (a slot which is newly
introduced by this patch). UART(0) must now be attached manually in
boot.py (or otherwise) and inisetup.py is changed to provide code to do
this. For example, to attach use:
import uos, machine
uart = machine.UART(0, 115200)
uos.dupterm(uart, 1)
and to detach use:
uos.dupterm(None, 1)
When attached, all incoming chars on UART(0) go straight to stdin so
uart.read() will always return None. Use sys.stdin.read() if it's needed
to read characters from the UART(0) while it's also used for the REPL (or
detach, read, then reattach). When detached the UART(0) can be used for
other purposes.
If there are no objects in any of the dupterm slots when the REPL is
started (on hard or soft reset) then UART(0) is automatically attached.
Without this, the only way to recover a board without a REPL would be to
completely erase and reflash (which would install the default boot.py which
attaches the REPL).
Add CONFIG_NET_DHCPV4, which, after
https://github.com/zephyrproject-rtos/zephyr/pull/5750 works as follows:
static addresses are configured after boot, and DHCP requests are sent
at the same time. If a valid DHCP reply is received, it overrides the
static addresses.
This setup works out of the box for both direct connection to a
workstation (DHCP server usually is not available) and for connection
to a router (DHCP is available and required).
Before this patch:
>>> print(')
... ')
Traceback (most recent call last):
File "<stdin>", line 1
SyntaxError: invalid syntax
After this patch:
>>> print(')
Traceback (most recent call last):
File "<stdin>", line 1
SyntaxError: invalid syntax
This matches CPython and prevents getting stuck in REPL continuation when a
1-quote is unmatched.
Before this patch, when using the switch statement for dispatch in the VM
(not computed goto) a pending exception check was done after each opcode.
This is not necessary and this patch makes the pending exception check only
happen when explicitly requested by certain opcodes, like jump. This
improves performance of the VM by about 2.5% when using the switch.
This matches CPython behaviour on Linux: a socket that is new and not
listening or connected is considered "hung up".
Thanks to @rkojedzinszky for the initial patch, PR #3457.
This patch fixes the macro so you can pass any name in, and the macro will
make more sense if you're reading it on its own. It worked previously
because n_state is always passed in as n_state_out_var.
gcc 8.0 supports the naked attribute for x86 systems so it can now be used
here. And in fact it is necessary to use this for nlr_push because gcc 8.0
no longer generates a prelude for this function (even without the naked
attribute).
This patch adds the configuration MICROPY_HW_USB_ENABLE_CDC2 which enables
a new USB device configuration at runtime: VCP+VCP+MSC. It will give two
independent VCP interfaces available via pyb.USB_VCP(0) and pyb.USB_VCP(1).
The first one is the usual one and has the REPL on it. The second one is
available for general use.
This configuration is disabled by default because if the mode is not used
then it takes up about 2200 bytes of RAM. Also, F4 MCUs can't support this
mode on their USB FS peripheral (eg PYBv1.x) because they don't have enough
endpoints. The USB HS peripheral of an F4 supports it, as well as both the
USB FS and USB HS peripherals of F7 MCUs.
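As an illustration (a hedged sketch: the mode string follows the
VCP+VCP+MSC configuration named above, the call would typically go in
boot.py, and the write payload is arbitrary):
import pyb
pyb.usb_mode('VCP+VCP+MSC')   # select the dual-VCP configuration
repl = pyb.USB_VCP(0)         # first VCP interface, carries the REPL as usual
aux = pyb.USB_VCP(1)          # second VCP interface, free for general use
aux.write(b'hello from the second VCP\r\n')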
The documentation (including the examples) for elapsed_millis and
elapsed_micros can be found in docs/library/pyb.rst so doesn't need to be
written in full in the source code.
When disabled, the pyb.I2C class saves around 8k of code space and 172
bytes of RAM. The same functionality is now available in machine.I2C
(for F4 and F7 MCUs).
It is still enabled by default.
This driver uses low-level register access to control the I2C peripheral
(ie it doesn't rely on the ST HAL) and provides the same C-level API as the
existing F7 hardware driver.
- Updated supported git hash to current IDF version.
- Added missing targets and includes to Makefile.
- Updated error codes for networking module.
- Added required constant to sdkconfig configuration.
This patch moves the start of the root pointer section in mp_state_ctx_t
so that it skips entries that are not pointers and don't need scanning.
Previously, the start of the root pointer section was at the very beginning
of the mp_state_ctx_t struct (which is the beginning of mp_state_thread_t).
This was because the original assembler version of the NLR code was
hard-coded to
have the nlr_top pointer at the start of this state structure. But now
that the NLR code is partially written in C there is no longer this
restriction on the location of nlr_top (and a comment to this effect has
been removed in this patch).
So now the root pointer section starts part way through the
mp_state_thread_t structure, after the entries which are not root pointers.
This patch also moves the non-pointer entries for MICROPY_ENABLE_SCHEDULER
outside the root pointer section.
Moving non-pointer entries out of the root pointer section helps to make
the GC more precise and should help to prevent some cases of collectable
garbage being kept.
This patch also has a measurable improvement in performance of the
pystone.py benchmark: on unix x86-64 and stm32 there was an improvement of
roughly 0.6% (tested with both gcc 7.3 and gcc 8.1).
This patch changes 2 things in the endianness detection:
1. Don't assume that __BYTE_ORDER__ not being __ORDER_LITTLE_ENDIAN__ means
that the machine is big endian, so add an explicit check that this macro
is indeed __ORDER_BIG_ENDIAN__ (same with __BYTE_ORDER, __LITTLE_ENDIAN
and __BIG_ENDIAN). A machine could have PDP endianness.
2. Remove the checks which base their autodetection decision on whether any
little or big endian macros are defined (eg __LITTLE_ENDIAN__ or
__BIG_ENDIAN__). Just because a system defines these does not mean it
has that endianness.
See issue #3760.
Printing of uPy floats can differ by the floating-point precision on
different architectures (eg 64-bit vs 32-bit x86), so it's not possible to
use printing of floats in some parts of this test. Instead we can just
check for equivalence with what is known to be the correct answer.
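For example (an illustrative pattern, not the actual test content), rather
than printing the float directly the test can print a comparison whose
result is the same at either precision:
print(2 ** 0.5)                          # printed digits differ between 32-bit and 64-bit floats
print(abs(2 ** 0.5 - 1.414213) < 1e-5)   # prints True at either precision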
For cases where size_t is smaller than mp_int_t (eg nan-boxing builds) the
difference between two size_t's is not sign extended into mp_int_t and so
the result is never negative. This patch fixes this bug by using ssize_t
for the type of the result.
This gives dir() better behaviour when listing the attributes of a user
type that defines __getattr__: it will now not list those attributes for
which __getattr__ raises AttributeError (meaning the attribute is not
supported by the object).
This patch fixes the possibility of a crash of the REPL when tab-completing
an object which raises an exception when its attributes are accessed.
See issue #3729.
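A small example of the behaviour (hypothetical class name):
class Strict:
    def __getattr__(self, name):
        raise AttributeError(name)   # every dynamically looked-up attribute is "unsupported"

s = Strict()
print(dir(s))   # raising attributes are simply not listed, and no exception escapes
Tab-completing "s." at the REPL likewise no longer crashes.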
This new helper function acts like mp_load_method_maybe but is wrapped in
an NLR handler so it can catch exceptions. It prevents AttributeError from
propagating out, and optionally all other exceptions. This helper can be
used to fully implement hasattr (see follow-up commit), and also for cases
where mp_load_method_maybe is used but it must now raise an exception.
Commit e269cabe3e added a check that the
first argument to the to_bytes() method is an integer, and now uPy
follows CPython behaviour and raises a TypeError for this test.
Note: CPython checks the argument types before checking the number of
arguments, but uPy does it the other way around, so they give different
exception messages for this test, but still the same type, a TypeError.
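For example:
(1).to_bytes(2, 'little')     # ok: b'\x01\x00'
(1).to_bytes('2', 'little')   # now raises TypeError in uPy, matching CPython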
On this 32-bit arch there's no need to use the long version of the format
specifier. It's only there to appease the compiler which checks the type
of the args passed to printf. Removing the "l" saves a bit of code space.
The order of function calls in an arithmetic expression is undefined and so
they must be written out as sequential statements.
Thanks to @dv-extrarius for reporting this issue, see issue #3690.
If a socket is cleanly shut down by the peer then reads on this socket
should continue to return zero bytes. The lwIP socket API does not have
this behaviour (it only returns zero once, then blocks on subsequent calls)
so this patch adds explicit checks and logic for peer closed sockets.
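A sketch of the expected behaviour (conn is an accepted TCP socket whose
peer has shut down cleanly; the handler is hypothetical):
chunk = conn.recv(128)
while chunk:                 # drain whatever data was buffered before the close
    handle(chunk)            # hypothetical application handler
    chunk = conn.recv(128)
print(conn.recv(128))        # b'' again: subsequent reads keep returning zero bytes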
Add --init to the submodule update example so that all submodules get
initialised, including the nested (--recursive) ones. Without it,
submodules may never be initialised in the first place.
Disabling this saves around 6000 bytes of code space and gets the 512k
build fitting in the available flash again (it increased lately due to an
increase in the size of the ESP8266 SDK).
In adcall.py the pyb module may not be imported, so use ADCAll directly.
In dac.py the DAC object now prints more info, so update .exp file.
In spi.py the SPI should be deinitialised upon exit, so the test can run a
second time correctly.
For a given IRQn (eg UART) there's no need to carry around both a PRI and
SUBPRI value (eg IRQ_PRI_UART, IRQ_SUBPRI_UART). Instead, the IRQ_PRI_UART
value has been changed in this patch to be the encoded hardware value,
using NVIC_EncodePriority. This way the NVIC_SetPriority function can be
used directly, instead of going through HAL_NVIC_SetPriority which must do
extra processing to encode the PRI+SUBPRI.
For a priority grouping of 4 (4 bits for preempt priority, 0 bits for the
sub-priority), which is used in the stm32 port, the IRQ_PRI_xxx constants
remain unchanged in their value.
This patch also "fixes" the use of raise_irq_pri() which should be passed
the encoded value (but as mentioned above the unencoded value is the same
as the encoded value for priority grouping 4, so there was no bug from this
error).
The problem is the existing code which tries to optimise the
reinitialisation of the DMA breaks the abstraction of the HAL. For the
STM32L4 the HAL's DMA setup code maintains two private vars (ChannelIndex,
DmaBaseAddress) and updates a hardware register (CCR).
In HAL_DMA_Init(), the CCR is updated to set the direction of the DMA.
This is a problem because, when using the SD Card interface, the same DMA
channel is used in both directions, so the direction bit in the CCR must
follow that.
A quick and effective fix for the L4 is to simply call HAL_DMA_DeInit() and
HAL_DMA_Init() every time.
This is a more consistent use of errno codes. For example, it may be that
a stream returns MP_EAGAIN but the mp_is_nonblocking_error() macro doesn't
catch this value because it checks for EAGAIN instead (which may have a
different value than MP_EAGAIN when MICROPY_USE_INTERNAL_ERRNO is enabled).
Most modern systems have EWOULDBLOCK aliased to EAGAIN, ie they have the
same value. But some systems use different values for these errnos and if
a uPy port is using the system errno values (ie not the internal uPy
values) then it's important to be able to distinguish EWOULDBLOCK from
EAGAIN. Eg if a system call returned EWOULDBLOCK it must be possible to
check for this return value, and this patch makes this now possible.
If MICROPY_USE_INTERNAL_ERRNO is disabled, MP_EINVAL is not guaranteed
to have the value 22, so we cannot depend on OSError(22,).
Instead, to support any given port's errno values, without relying
on uerrno, we just check that the args[0] is positive.
ADC3 is used because the H7's internal ADC channels are connected to ADC3
and the uPy driver doesn't support more than one ADC.
Only 12-bit resolution is supported because 12 is hard-coded and 14/16 bits
are not recommended on some ADC3 pins (see errata).
Values from internal ADC channels are known to give wrong values at
present.
The esp8266 uses modlwip.c for its usocket implementation, which allows to
easily support callbacks on socket events (like when a socket becomes ready
for reading). This is not as easy to do for the esp32 which uses the
ESP-IDF-provided lwIP POSIX socket API. Socket events are needed to get
WebREPL working, and this patch provides a way for such events to work by
explicitly polling registered sockets for readability, and then calling the
associated callback if the socket is readable.
After calling HAL_SYSTICK_Config the SysTick IRQ priority is set to 15, the
lowest priority. This commit reconfigures the IRQ priority to the desired
TICK_INT_PRIORITY value.
By default the stm module is included in the build, but a board can now
define MICROPY_PY_STM to 0 to not include this module. This reduces the
firmware by about 7k.
To use HSE bypass mode the board should define:
#define MICROPY_HW_CLK_USE_BYPASS (1)
If this is not defined, or is defined to 0, then HSE oscillator mode is
used.
This patch allows a given board to configure which pins are used for the
CAN peripherals, in a similar way to all the other bus peripherals (I2C,
UART, SPI). To enable CAN on a board the mpconfigboard.h file should
define (for example):
#define MICROPY_HW_CAN1_TX (pin_B9)
#define MICROPY_HW_CAN1_RX (pin_B8)
#define MICROPY_HW_CAN2_TX (pin_B13)
#define MICROPY_HW_CAN2_RX (pin_B12)
And the board config file should no longer define MICROPY_HW_ENABLE_CAN.
The individual union members (like SPI, I2C) are never used, only the
generic "reg" entry is. And the union names can clash with macro
definitions in the HAL so better to remove them.
The only configuration that changes with this patch is that on L4 MCUs the
clock prescaler changed from ADC_CLOCK_ASYNC_DIV2 to ADC_CLOCK_ASYNC_DIV1
for the ADCAll object. This should be ok.
A value of DISABLE for EOCSelection is invalid. This would have been
interpreted instead as ADC_EOC_SEQ_CONV, but really it should be
ADC_EOC_SINGLE_CONV for the uses in this code. So this has been fixed.
ExternalTrigConv should be ADC_SOFTWARE_START because all ADC
conversions are started by software. This is now fixed.
This can be used to select the output buffer behaviour of the DAC. The
default values are chosen to retain backwards compatibility with existing
behaviour.
Thanks to @peterhinch for the initial idea to add this feature.
Reading into a bytearray will truncate values to 0xff so the assertions
checking read_timed() would previously always succeed.
Thanks to @peterhinch for finding this problem and providing the solution.
This event queue has UART events posted to it and they need to be drained
for it to operate without error. The queue is not used by the uPy UART
class so it should be removed to prevent the IDF emitting errors.
Fixes #3704.
Instead of emitnative.c having configuration code for each supported
architecture, and then compiling this file multiple times with different
macros defined, this patch adds a file per architecture with the necessary
code to configure the native emitter. These files then #include the
emitnative.c file.
This simplifies emitnative.c (which is already very large), and simplifies
the build system because emitnative.c no longer needs special handling for
compilation and qstr extraction.
Keeping all the stress related tests in one place makes it easier to
stress-test a given port, and to also not run such tests on ports that
can't handle them.
The 2nd and 3rd args of the ternary operator are treated like they are in
the same expression and must have similar types. void is not compatible
with int so that's why the compiler is complaining.
This patch moves the implementation of stream closure from a dedicated
method to the ioctl of the stream protocol, for each type that implements
closing. The benefits of this are:
1. Rounds out the stream ioctl function, which already includes flush,
seek and poll (among other things).
2. Makes calling mp_stream_close() on an object slightly more efficient
because it now no longer needs to lookup the close method and call it,
rather it just delegates straight to the ioctl function (if it exists).
3. Reduces code size and allows future types that implement the stream
protocol to be smaller because they don't need a dedicated close method.
Code size reduction is around 200 bytes smaller for x86 archs and around
30 bytes smaller for the bare-metal archs.
The LHS passed to mp_obj_int_binary_op() will always be an integer, either
a small int or a big int, so the test for this type doesn't need to include
an "other, unsupported type" case.
The string "Q+?" is special in that it hashes to zero with the djb2
algorithm (among other strings), and a zero hash should be incremented to a
hash of 1.
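A rough Python sketch of the rule (the constants and mask are illustrative,
not the exact qstr hash code):
def djb2_hash(data, mask=0xFFFF):
    h = 5381
    for b in data:
        h = ((h * 33) ^ b) & 0xFFFFFFFF   # djb2-style step per byte
    h &= mask
    return h if h != 0 else 1             # a zero hash is bumped to 1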
Without the compiler enabled the mp_optimise_value is unused, and the
micropython.opt_level() function is not useful, so exclude these from the
build to save RAM and code size.
The case of a return statement in the try suite of a try-except statement
was previously only tested by builtin_compile.py, and only then in the part
of this test which checked for the existence of the compile builtin. So
this patch adds an explicit unit test for this case.
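The case under test is essentially:
def f():
    try:
        return 'returned from try'
    except Exception:
        return 'returned from except'

print(f())   # 'returned from try'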
When pystack is enabled mp_obj_fun_bc_prepare_codestate() will always
return a valid pointer, and if there is no more pystack available then it
will raise an exception (a RuntimeError). So having pystack enabled with
stackless enabled automatically gives strict stackless mode. There is
therefore no need to have code for strict stackless mode when pystack is
enabled, and this patch optimises the VM for such a case.
The VM expects that, if mp_resume() returns MP_VM_RETURN_EXCEPTION, then
the returned value is an exception instance (eg to add a traceback to it).
It's possible that a value passed to a generator's throw() is not an
exception so must be explicitly checked for if the thrown value is not
intercepted by the generator.
Thanks to @jepler for finding the bug.
Prior to this patch the code would crash if a key in a ** dict was anything
other than a str or qstr. This is because mp_setup_code_state() assumes
that keys in kwargs are qstrs (for efficiency).
Thanks to @jepler for finding the bug.
The main() function has a predefined type in C which is not so useful for
embedded contexts. This patch renames main() to stm32_main() so we can
define our own type signature for this function. The type signature is
defined to have a single argument which is the "reset_mode" and is passed
through as r0 from Reset_Handler. This allows, for example, a bootloader
to pass through information into the main application.
The Reset_Handler needs to copy the data section and zero the BSS, and
these operations should be as optimised as possible to reduce start up
time. The versions provided in this patch are about 2x faster (on a Cortex
M4) than the previous implementations.
Rather than pin objects themselves. The actual object is now pin_X_obj and
defines are provided so that pin_X is &pin_X_obj. This makes it so that
code that uses pin objects doesn't need to know if they are literals or
objects (that need pointers taken) or something else. They are just
entities that can be passed to the map_hal_pin_xxx functions. This mirrors
how the core handles constant objects (eg mp_const_none which is
&mp_const_none_obj) and allows for the possibility of different
implementations of the pin layer.
For example, prior to this patch there was the following:
extern const pin_obj_t pin_A0;
#define pyb_pin_X1 pin_A0
...
mp_hal_pin_high(&pin_A0);
and now there is:
extern const pin_obj_t pin_A0_obj;
#define pin_A0 (&pin_A0_obj)
#define pyb_pin_X1 pin_A0
...
mp_hal_pin_high(pin_A0);
This patch should have minimal effect on board configuration files. The
only change that may be needed is if a board has .c files that configure
pins.
This patch forces a board to explicitly define TEXT1_ADDR in order to
split the firmware into two separate pieces. Otherwise the default is now
to produce only a single continuous firmware image with all ISR, text and
data together.
This patch allows a particular board to independently specify the linker
scripts for 1) the MCU memory layout; 2) how the different firmware
sections are arranged in memory. Right now all boards follow the same
layout with two separate firmware section, one for the ISR and one for the
text and data. This leaves room for storage (filesystem data) to live
between the firmware sections.
The idea with this patch is to accommodate boards that don't have internal
flash storage and only need to have one continuous firmware section. Thus
the common.ld script is renamed to common_ifs.ld to make explicit that it
is used for cases where the board has internal flash storage.
Explicitly writing out the implementation of sys_tick_has_passed makes
these bdev files independent of systick.c and more reusable as a general
component. It also reduces the code size slightly.
The irq.h header is added to spibdev.c because it uses declarations in that
file (irq.h is usually included implicitly via mphalport.h but not always).
Taking the address assumes that the pin is an object (eg a struct), but it
could be a literal (eg an int). Not taking the address makes this driver
more general for other uses.
genhdr/pins.h is an internal header file that defines all of the pin
objects and it's cleaner to have pin.h include it (where the struct's for
these objects are defined) rather than an explicit include by every user.
The HAL requires strict aliasing optimisation to be turned on to function
correctly (at least for the SD card driver on F4 MCUs). This optimisation
was recently disabled with the addition of H7 support due to the H7 HAL
having errors with the strict aliasing optimisation enabled. But this is
now fixed in the latest stm32lib and so the optimisation can now be
re-enabled.
Thanks to @chuckbook for finding that there was a problem with the SD card
on F4 MCUs with the strict aliasing optimisation disabled.
The CMSIS files for the STM32 range provide macros to distinguish between
the different MCU series: STM32F4, STM32F7, STM32H7, STM32L4, etc. Prefer
to use these instead of custom ones.
By using pre-compiled regexs, using startswith(), and explicitly checking
for empty lines (of which around 30% of the input lines are), automatic
qstr extraction is sped up by about 10%.
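A loose Python sketch of the kind of changes described (the pattern, the
prefix test and the data flow are illustrative, not the actual
qstr-extraction script logic):
import re

_qstr_re = re.compile(r'MP_QSTR_\w+')      # compiled once, not per line

def extract_qstrs(lines):
    found = set()
    for line in lines:
        if not line:                       # empty lines (~30% of the input) are skipped cheaply
            continue
        if line.startswith('#'):           # cheap startswith() test before running the regex
            continue
        found.update(_qstr_re.findall(line))
    return found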
This patch provides a custom (and simple) function to receive data on the
CAN bus, instead of the HAL function. This custom version calls
mp_handle_pending() while waiting for messages, which, among other things,
allows to interrupt the recv() method via KeyboardInterrupt.
mp_spiflash_read had a bug in it where "dest" and "addr" were incremented
twice for a certain special case. This was fixed, which then allowed the
function to be simplified to reduce code size.
mp_spiflash_write had a bug in it where "src" was not incremented correctly
for the case where the data to be written included the caching buffer as
well as some bytes after this buffer. This was fixed and the resulting
code simplified.
Certain pins (eg 4 and 5) seem to behave differently at the hardware level
when in open-drain mode: they glitch when set "high" and drive the pin
active high for a brief period before disabling the output driver. To work
around this make the pin an input to let it float high.
This config variable controls whether to support storage on the internal
flash of the MCU. It is enabled by default and should be explicitly
disabled by boards that don't want internal flash storage.
It makes it cleaner, and simpler to support multiple different block
devices. It also allows to easily extend a given block device with new
ioctl operations.
This patch alters the SPI-flash memory driver so that it uses the new
low-level C SPI protocol (from drivers/bus/spi.h) instead of the uPy SPI
protocol (from extmod/machine_spi.h). This allows the SPI-flash driver to
be used independently from the uPy runtime.
This patch takes the software SPI implementation from extmod/machine_spi.c
and moves it to a dedicated file in drivers/bus/softspi.c. This allows the
SPI driver to be used independently of the uPy runtime, making it a more
general component.
The PWM at full value was not considered as an "active" channel so if no
other channel was used the timer used to manage PWM was not started. So
when another duty value was set the PWM timer restarted and there was a
visible glitch when driving LEDs. Such a glitch can be seen with the
following code (assuming active-low LED on pin 0):
p = machine.PWM(machine.Pin(0))
p.duty(1023) # full width, LED is off
p.duty(1022) # LED flashes brightly then goes dim
This patch fixes the glitch.
This is just to test that the function exists and returns some kind of
valid value. Although this file is for testing ms/us functions, put the
ticks_cpu() test here so as not to add a new test file.
The spiflash memory driver is reworked to allow the underlying bus to be
either normal SPI or QSPI. In both cases the bus can be implemented in
software or hardware, as long as the spiflash driver is passed the correct
configuration structure.
All callers of mp_obj_int_formatted() are expected to pass in a valid int
object, and they do:
- mp_obj_int_print() should always pass through an int object because it is
the print special method for int instances.
- mp_print_mp_int() checks that the argument is an int, and if not converts
it to a small int.
This patch saves around 20-50 bytes of code space.
This test for calling gc_realloc() while the GC is locked can be done in
pure Python, so better to do it that way since it can then be tested on
more ports.
Prior to this patch, some architectures (eg unix x86) could render floats
with "negative" digits, like ")". For example, '%.23e' % 1e-80 would come
out as "1.0000000000000000/)/(,*0e-80". This patch fixes the known cases.
Prior to this patch, some architectures (eg unix x86) could render floats
with a ":" character in them, eg 1e+39 would come out as ":e+38" (":" is
just after "9" in ASCII so this is like 10e+38). This patch fixes some of
these cases.
Prior to this patch the %f formatting of some FP values could be off by up
to 1, eg '%.0f' % 123 would return "122" (unix x64). Depending on the FP
precision (single vs double) certain numbers would format correctly, but
others would not. This patch should fix all cases of rounding for %f.
This patch eliminates heap allocation in the VFS FAT disk IO layer, when
calling the underlying readblocks/writeblocks methods. The bytearray
object that is passed to these methods is now allocated on the C stack
rather than the heap (it's only 4 words big).
This means that these methods should not retain a pointer to the buffer
object that is passed in, but this was already a restriction because the
original heap-allocated bytearray had its buffer passed by reference.
There's no need to have MP_OBJ_NULL a special case, the code can re-use
the MP_OBJ_STOP_ITERATION value to signal the special case and the VM can
detect this with only one check (for MP_OBJ_STOP_ITERATION).
This patch concerns the handling of an NLR-raised StopIteration, raised
during a call to mp_resume() which is handling the yield from opcode.
Previously, commit 6738c1dded introduced code
to handle this case, along with a test. It seems that it was lucky that
the test worked because the code did not correctly handle the stack pointer
(sp).
Furthermore, commit 79d996a57b improved the
way mp_resume() propagated certain exceptions: it changed raising an NLR
value to returning MP_VM_RETURN_EXCEPTION. This change meant that the
test introduced in gen_yield_from_ducktype.py was no longer hitting the
code introduced in 6738c1dded.
The patch here does two things:
1. Fixes the handling of sp in the VM for the case that yield from is
interrupted by a StopIteration raised via NLR.
2. Introduces a new test to check this handling of sp and re-covers the
code in the VM.
Float parsing (both single and double precision) may have a relative error
of order the floating point precision, so adjust tests to take this into
account by not printing all of the digits of the answer.
These new tests cover cases that can't be reached from Python and get
coverage of py/mpz.c to 100%.
These "unreachable from Python" pieces of code could be removed but they
form an integral part of the mpz C API and may be useful for non-Python
usage of mpz.
This path for src->dig==NULL is never used because mpz_clone() is always
called with an argument that has a non-zero integer value, and hence has
some digits allocated to it (mpz_clone() is a static function private to
mpz.c; all callers of this function first check if the integer value is
zero and, if so, take a special-case path, bypassing the call to
mpz_clone()).
There are some unused and commented-out functions that may actually pass a
zero-valued mpz to mpz_clone(), so TODOs are added to these functions in
case they are needed in the future.
There is a finite list of ascending primes used for the size of a hash
table, and this test tests that the code can handle a dict larger than the
maximum value in that list of primes. Adding this test gets py/map.c to
100% coverage.
All callers of the asm entry function guarantee that num_locals>=0, so no
need to add an explicit check for it. Use an assertion instead.
Also, the signature of asm_x86_entry is changed to match the other asm
entry functions.
This patch just moves the definition of the wrapper object fat_vfs_open_obj
to the location of the definition of its function, which matches how it's
done in most other places in the code base.
The fat_vfs_ilistdir2() function was only used by fat_vfs_ilistdir_func()
so moving the former into the same file as the latter allows it to be
placed directly into the latter function, thus saving code size.
These ports don't need anything from extmod so don't include those files
at all in the build. This speeds up the build by about 10% when building
with a single core.
If a port only needs the core files then it can now use the $(PY_CORE_O)
variable instead of $(PY_O). $(PY_EXTMOD_O) contains the list of extmod
files (including some files from lib/). $(PY_O) retains its original
definition as the list of all object file (including those for frozen code)
and is a convenience variable for ports that want everything.
Saves a few bytes of code space, and is more efficient because with
MICROPY_GC_CONSERVATIVE_CLEAR enabled by default all memory is already
cleared when allocated.
Otherwise passing -1 as maxlen will lead to a zero allocation and
subsequent unbound buffer overflow in deque.append() because i_put is
allowed to grow without bound.
So far, this implements just the append() and popleft() methods, required
for a normal queue. The constructor doesn't accept an arbitrary sequence to
initialize from (an empty deque is always created), so an empty tuple must
be passed as such. Only fixed-size deques are supported, so the 2nd
argument (size) is required.
There's also an extension to CPython: if True is passed as the 3rd
argument then append(), instead of silently overwriting the oldest item on
queue overflow, will throw IndexError. This behavior is desired in many
cases where queues should store information reliably, instead of silently
losing some items.
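A usage sketch based on the description above (assuming the module is
importable as ucollections):
from ucollections import deque

q = deque((), 5)            # an empty tuple and the max size are both required
q.append(1)
q.append(2)
print(q.popleft())          # 1

strict = deque((), 2, True) # extension: True as 3rd arg raises on overflow
strict.append('a')
strict.append('b')
# strict.append('c')        # would raise IndexError instead of dropping 'a'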
Currently only the first 2 args are used, but this patch should at least
make getaddrinfo() signature-compatible with CPython and other bare-metal
ports that use the lwip bindings.
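So a CPython-style call now works, even though only the first two arguments
are acted on (module name assumed to be usocket):
import usocket
ai = usocket.getaddrinfo('micropython.org', 80, 0, usocket.SOCK_STREAM)
addr = ai[0][-1]    # sockaddr tuple, same shape as before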
The micropython.stack_use() function is useful to query the current C stack
usage, and it's inclusion in the micropython module doesn't need to be tied
to the inclusion of mem_info()/qstr_info() because it doesn't rely on any
of the code from these functions. So this patch introduces the config
option MICROPY_PY_MICROPYTHON_STACK_USE which can be used to independently
control the inclusion of stack_use(). By default it is enabled if
MICROPY_PY_MICROPYTHON_MEM_INFO is enabled (thus not changing any of the
existing ports).
The new option is MICROPY_ENABLE_EXTERNAL_IMPORT and is enabled by default
so that the default behaviour is the same as before. With it disabled
import is only supported for built-in modules, not for external files nor
frozen modules. This allows to support targets that have no filesystem of
any kind and that only have access to pre-supplied built-in modules
implemented natively.
Prior to this patch uPy (on a 32-bit arch) would have severe issues when
calling bytes(-1): such a call would call vstr_init_len(vstr, -1) which
would then +1 on the len and call vstr_init(vstr, 0), which would then
round this up and allocate a small amount of memory for the vstr. The
bytes constructor would then attempt to zero out all this memory, thinking
it had allocated 2^32-1 bytes.
This patch changes the way REPL autocomplete finds matches. It now probes
the target object for all qstrs via mp_load_method_maybe to look for a
match with the given input string. Similar to how the builtin dir()
function works, this new algorithm now finds all methods and instances of
user-defined classes including attributes of their parent classes. This
helps a lot at the REPL prompt for user-discovery and to autocomplete names
even for classes that are derived.
The downside is that this new algorithm is slower than the previous one,
and in particular will be slower the more qstrs there are in the system.
But because REPL autocomplete is primarily used in an interactive way it is
not that important to make it fast, as long as it is "fast enough" compared
to human reaction.
On a slow microcontroller (CPU running at 16MHz) the autocomplete time for
a list of 35 names in the outer namespace (pressing tab at a bare prompt)
takes about 160ms with this algorithm, compared to about 40ms for the
previous implementation (this time includes the actual printing of the
names as well). This time of 160ms is very reasonable especially given the
new functionality of listing all the names.
This patch also decreases code size by:
bare-arm: +0
minimal x86: -128
unix x64: -128
unix nanbox: -224
stm32: -88
cc3200: -80
esp8266: -92
esp32: -84
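A loose Python-level sketch of the probing idea (the real code walks the
interned qstr pool in C; here a hypothetical candidate list stands in for
it):
def complete(obj, prefix, candidates):
    matches = []
    for name in candidates:          # in the C code: every qstr in the system
        if not name.startswith(prefix):
            continue
        try:
            getattr(obj, name)       # equivalent of mp_load_method_maybe
        except AttributeError:
            continue                 # not an attribute of this object
        matches.append(name)
    return matches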
This patch improves the builtin dir() function by probing the target object
with all possible qstrs via mp_load_method_maybe. This is very simple (in
terms of implementation), doesn't require recursion, and allows to list all
methods of user-defined classes (without duplicates) even if they have
multiple inheritance with a common parent. The downside is that it can be
slow because it has to iterate through all the qstrs in the system, but
the "dir()" function is anyway mostly used for testing frameworks and user
introspection of types, so speed is not considered a priority.
In addition to providing a more complete implementation of dir(), this
patch is simpler than the previous implementation and saves some code
space:
bare-arm: -80
minimal x86: -80
unix x64: -56
unix nanbox: -48
stm32: -80
cc3200: -80
esp8266: -104
esp32: -64
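For example (hypothetical classes):
class A:
    def ping(self):
        pass

class B:
    def pong(self):
        pass

class C(A, B):
    pass

names = dir(C())
print('ping' in names, 'pong' in names)   # True True, with no duplicate entries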
This macro is written out explicitly in the two locations that it is used
and then the code is optimised, opening possibilities for further
optimisations and reducing code size:
unix: -48
minimal CROSS=1: -32
stm32: -32
Using the message "maximum recursion depth exceeded" for when the pystack
runs out of memory can be misleading because the pystack can run out for
reasons other than deep recursion (although in most cases pystack
exhaustion is probably indirectly related to deep recursion). And it's
important to give the user more precise feedback as to the reason for the
error: if they know precisely that the pystack was exhausted then they have
a chance to increase the amount of memory available to the pystack (as
opposed to not knowing if it was the C stack or pystack that ran out).
Also, C stack exhaustion is more serious than pystack exhaustion because it
could have been that the C stack overflowed and overwrote/corrupted some
data and so the system must be restarted. The pystack can never corrupt
data in this way so pystack exhaustion does not require a system restart.
Knowing the difference between these two cases is therefore important.
The actual exception type for pystack exhaustion remains as RuntimeError so
that programmatically it behaves the same as a C stack exhaustion.
By adding __builtin_unreachable() at the end of nlr_push, we're
essentially telling the compiler that this function will never return.
When GCC LTO is in use, this means that any time nlr_push() is called
(which is often), the compiler thinks this function will never return
and thus eliminates all code following the call.
Note: I've added a 'return 0' for older GCC versions like 4.6 which
complain about not returning anything (which doesn't make sense in a
naked function). Newer GCC versions (tested 4.8, 5.4 and some others)
don't complain about this.
This constant exception instance was once used by m_malloc_fail() to raise
a MemoryError without allocating memory, but it was made obsolete long ago
by 3556e45711. The functionality is now
replaced by the use of mp_emergency_exception_obj which lives in the global
uPy state, and which can handle any exception type, not just MemoryError.
This feature is not often used so is guarded by the config option
MICROPY_PY_BUILTINS_RANGE_BINOP which is disabled by default. With this
option disabled MicroPython will always return false when comparing two
range objects for equality (unless they are exactly the same object
instance). This does not match CPython so if (in)equality between range
objects is needed then this option should be enabled.
Enabling this option costs between 100 and 200 bytes of code space
depending on the machine architecture.
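For example, with the option enabled the following matches CPython
(equality is based on the sequence of values produced, not the constructor
arguments):
print(range(0, 6, 2) == range(0, 5, 2))   # True: both produce 0, 2, 4
print(range(0, 3) == range(0, 4))         # False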
This patch provides inline versions of the utf8 helper functions for the
case when unicode is disabled (MICROPY_PY_BUILTINS_STR_UNICODE set to 0).
This saves code size.
The unichar_charlen function is also renamed to utf8_charlen to match the
other utf8 helper functions, and the signature of this function is adjusted
for consistency (const char* -> const byte*, mp_uint_t -> size_t).
Instead of putting just 'CRASH' in the .py.out file, this patch makes it so
any output from uPy that led to the crash is stored in the .py.out file, as
well as the 'CRASH' message at the end.
Prior to this patch, storage.c was a combination of code that handled
either internal flash or external SPI flash and exposed one of them as a
block device for the local storage. It was also exposed to the USB MSC.
This patch splits out the flash and SPI code to separate files, which each
provide a general block-device interface (at the C level). Then storage.c
just picks one of them to use as the local storage medium. The aim of this
factoring is to allow to add new block devices in the future and allow for
easier configurability.
This patch allows to completely compile-out support for USB, and no-USB is
now the default. If a board wants to enable USB it should define:
#define MICROPY_HW_ENABLE_USB (1)
And then one or more of the following to select the USB PHY:
#define MICROPY_HW_USB_FS (1)
#define MICROPY_HW_USB_HS (1)
#define MICROPY_HW_USB_HS_IN_FS (1)
Newer versions of the HAL use names which are cleaner and more
self-consistent amongst the HAL itself. This patch switches to use those
names in most places so it is easier to update the HAL in the future.
Prior to this patch the USBD driver did not handle the recipient correctly
for setup requests. It was not interpreting the req->wIndex field in the
right way: in some cases this field indicates the endpoint number but the
code was assuming it always indicated the interface number.
This patch fixes this. The only noticeable change is to the MSC
interface, which should now correctly respond to the USB_REQ_CLEAR_FEATURE
request and hence unmount properly from the host when requested.
mpconfigboard_common.h now sets the defaults so there is no longer a need
to explicitly list all configuration options in a board's mpconfigboard.h
file.
This file mirrors py/mpconfig.h but for board-level config options. It
provides a default configuration, to be overridden by a specific
mpconfigboard.h file, as well as setting up certain macros to automatically
configure a board.
Prior to this patch, a float literal that was close to subnormal would
have a loss of precision when parsed. The worst case was something like
float('10000000000000000000e-326') which returned 0.0.
This patch simplifies how sentinel values are stored on the stack when
doing an unwind return or jump. Instead of storing two values on the stack
for an unwind jump it now stores only one: a negative small integer means
unwind-return and a non-negative small integer means unwind-jump with the
value being the number of exceptions to unwind. The savings in code size
are:
bare-arm: -56
minimal x86: -68
unix x64: -80
unix nanbox: -4
stm32: -56
cc3200: -64
esp8266: -76
esp32: -156
The array should be of type unsigned byte because that is the type of the
values being stored. And changing to uint8_t helps to prevent warnings
from some static analysers.
Note that the check for elem!=NULL is removed for the
MP_MAP_LOOKUP_ADD_IF_NOT_FOUND case because mp_map_lookup will always
return non-NULL for such a case.
The calls to rtc_init_start(), sdcard_init() and storage_init() are all
guarded by a check for first_soft_reset, so it's simpler to just put them
all before the soft-reset loop, without the check.
The call to machine_init() can also go before the soft-reset loop because
it is only needed to check the reset cause which can happen once at the
first boot. To allow this to work, the reset cause must be set to SOFT
upon a soft-reset, which is the role of the new function machine_deinit().
Upon boot the RTC early-init function should detect if LSE or LSI is
already selected/running and, if so, use it. When the LSI has previously
(in the previous reset cycle) been selected as the clock source the only
way to reliably tell is if the RTCSEL bits of the RCC_BDCR are set to the
correct LSI value. In particular the RCC_CSR bits for LSI control do not
indicate if the LSI is ready even if it is selected.
This patch removes the check on the RCC_CSR bits for the LSI being on and
ready and only uses the check on the RCC_BDCR to see if the LSI should be
used straightaway. This was tested on a PYBLITEv1.0 and with the patch the
LSI persists correctly as the RTC source as long as the backup domain
remains powered.
Previously, if LSE is selected but fails and the RTC falls back to LSI,
then the rtc_info flags would incorrectly state that LSE is used. This
patch fixes that by setting the bit in rtc_info only after the clock is
ready.
There is an underlying hardware SPI driver (built on top of the STM HAL)
and then on top of this sits the legacy pyb.SPI class as well as the
machine.SPI class. This patch improves the separation between these
layers, in particular decoupling machine.SPI from pyb.SPI.
This patch combines the compiler optimisation code for double and triple
tuple-to-tuple assignment, taking it from two separate if-blocks to one
combined if-block. This can be done because the code for both of these
optimisations has a lot in common. Combining them together reduces code
size for ports that have the triple-tuple optimisation enabled (and doesn't
change code size for ports that have it disabled).
The SPI sub-system is independent from the uPy state (eg the heap) and so
can safely persist across a soft reset. And this is actually necessary for
drivers that rely on SPI and that also need to persist across soft reset
(eg external SPI flash memory).
This patch adds support in the USBD configuration and CDC-MSC-HID class for
high-speed USB mode. To enable it the board configuration must define
USE_USB_HS, and either not define USE_USB_HS_IN_FS, or be an STM32F723 or
STM32F733 MCU which have a built-in HS PHY. High-speed mode is then
selected dynamically by passing "high_speed=True" to the pyb.usb_mode()
function, otherwise it defaults to full-speed mode.
This patch has been tested on an STM32F733.
By defining MICROPY_HW_USB_MAIN_DEV a given board can select to use either
USB_PHY_FS_ID or USB_PHY_HS_ID as the main USBD peripheral, on which the
REPL will appear. If not defined this will be automatically configured.
There's no need to have these as separate functions, they just take up
unnecessary code space and combining them allows to factor common code, and
also allows to support arbitrary string descriptor indices.
The routine waits for the DMA to finish, which is signalled from a DMA IRQ
handler. Using WFI makes the CPU sleep while waiting for the IRQ to arrive
which decreases power consumption. To make it work correctly the check for
the change in state must be atomic and so IRQs must be disabled during the
check. The key feature of the Cortex MCU that makes this possible is that
WFI will exit when an IRQ arrives even if IRQs are disabled.
CPython doesn't allow SEEK_CUR with non-zero offset for files in text mode,
and uPy inherited this behaviour for both text and binary files. It makes
sense to provide full support for SEEK_CUR of binary-mode files in uPy, and
to do this in a minimal way means also allowing to use SEEK_CUR with
non-zero offsets on text-mode files. That seems to be a fair compromise.
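For example (assuming a binary file data.bin exists):
f = open('data.bin', 'rb')
f.read(4)          # consume a 4-byte header
f.seek(8, 1)       # SEEK_CUR with a non-zero offset: skip the next 8 bytes
payload = f.read()
f.close()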
Build and test 32bit and 64bit versions of the windows port using gcc
from mingw-w64. Note a bunch of tests which rely on floating point
math/printing have been disabled for now since they fail.
This commit fixes two things:
1. Do not allocate on the heap in readblocks() - unless the block size
is bigger than 512 bytes.
2. Raise an error instead of returning 1 to indicate an error: the FAT
block device layer does not check the return value. And other
backends (e.g. esp32 blockdev) also raise an error instead of
returning non-zero.
The number of registers used should be 10, not 12, to match the assembly
code in nlrx64.c. With this change the 64bit mingw builds don't need to
use the setjmp implementation, and this fixes miscellaneous crashes and
assertion failures as reported in #1751 for instance.
To avoid mistakes in the future where something gcc-related for Windows
only gets fixed for one particular compiler/environment combination,
make use of a MICROPY_NLR_OS_WINDOWS macro.
To make sure everything nlr-related is now ok when built with gcc this
has been verified with:
- unix port built with gcc on Cygwin (i686-pc-cygwin-gcc and
x86_64-pc-cygwin-gcc, version 6.4.0)
- windows port built with mingw-w64's gcc from Cygwin
(i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc, version 6.4.0)
and MSYS2 (like the ones on Cygwin but version 7.2.0)
Add some features which are already enabled in the unix port and
default to using the Python stack for scoped allocations: this can be
more performant in cases the heap is heavily used because for example
the memory needed for storing *args and **kwargs doesn't require
scanning the heap to find a free block.
For MSVC off_t is defined in sys/types.h but according to the comment
earlier in mpconfigport.h this cannot be included directly.
So just make off_t the same as mp_off_t.
This fixes the build for MSVC with MICROPY_STREAMS_POSIX_API
enabled because stream.h uses off_t.
There are two checks that are always false so can be converted to (negated)
assertions to save code space and execution time. They are:
1. The check of the str parameter, which is required to be non-NULL as per
the original comment that it has enough space in it as calculated by
mp_int_format_size. And for all uses of this function str is indeed
non-NULL.
2. The check of the base parameter, which is already required to be between
2 and 16 (inclusive) via the assertion in mp_int_format_size.
The motivation behind this patch is to remove unreachable code in mpn_div.
This unreachable code was added some time ago in
9a21d2e070, when a loop in mpn_div was copied
and adjusted to work when mpz_dig_t was exactly half of the size of
mpz_dbl_dig_t (a common case). The loop was copied correctly but it wasn't
noticed at the time that the final part of the calculation of num-quo*den
could be optimised, and hence unreachable code was left for a case that
never occurred.
The observation for the optimisation is that the initial value of quo in
mpn_div is either exact or too large (never too small), and therefore the
subtraction of quo*den from num may subtract exactly enough or too much
(but never too little). Using this observation the part of the algorithm
that handles the borrow value can be simplified, and most importantly this
eliminates the unreachable code.
The new code has been tested with DIG_SIZE=3 and DIG_SIZE=4 by dividing all
possible combinations of non-negative integers with between 0 and 3
(inclusive) mpz digits.
Empty __VA_ARGS__ are not allowed in the C preprocessor so adjust the rule
arg offset calculation to not use them. Also, some compilers (eg MSVC)
require an extra layer of macro expansion.
This is the sixth and final patch in a series of patches to the parser that
aims to reduce code size by compressing the data corresponding to the rules
of the grammar.
Prior to this set of patches the rules were stored as rule_t structs with
rule_id, act and arg members. And then there was a big table of pointers
which allowed to lookup the address of a rule_t struct given the id of that
rule.
The changes that have been made are:
- Breaking up of the rule_t struct into individual components, with each
component in a separate array.
- Removal of the rule_id part of the struct because it's not needed.
- Put all the rule arg data in a big array.
- Change the table of pointers to rules to a table of offsets within the
array of rule arg data.
The last point is what is done in this patch here and brings about the
biggest decreases in code size, because an array of pointers is now an
array of bytes.
Code size changes for the six patches combined is:
bare-arm: -644
minimal x86: -1856
unix x64: -5408
unix nanbox: -2080
stm32: -720
esp8266: -812
cc3200: -712
For the change in parser performance: it was measured on pyboard that these
six patches combined gave an increase in script parse time of about 0.4%.
This is due to the slightly more complicated way of looking up the data for
a rule (since the 9th bit of the offset into the rule arg data table is
calculated with an if statement). This is an acceptable increase in parse
time considering that parsing is only done once per script (if compiled on
the target).
Instead of each rule being stored in ROM as a struct with rule_id, act and
arg, the act and arg parts are now in separate arrays and the rule_id part
is removed because it's not needed. This reduces code size, by roughly one
byte per grammar rule, around 150 bytes.
The rule name is only used for debugging, and this patch makes things a bit
cleaner by completely separating out the rule name from the rest of the
rule data.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c, along with a helper macro in nlr.h. This eliminates duplicated
code.
If MICROPY_NLR_SETJMP is not enabled and the machine is auto-detected then
nlr.h now defines some convenience macros for the individual NLR
implementations to use (eg MICROPY_NLR_THUMB). This keeps nlr.h and the
implementation in sync, and also makes the nlr_buf_t struct easier to read.
A function with a naked attribute must only contain basic inline asm
statements and no C code.
For nlr_push this means removing the "return 0" statement. But for some
gcc versions this induces a compiler warning so the __builtin_unreachable()
line needs to be added.
For nlr_jump, this function contains a combination of C code and inline asm
so cannot be naked.
This reverts commit 6a3a742a6c.
The above commit has a number of faults, starting from the motivation down
to the actual implementation.
1. Faulty implementation.
The original code contained functions like:
NORETURN void nlr_jump(void *val) {
nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
nlr_buf_t *top = *top_ptr;
...
__asm volatile (
"mov %0, %%edx \n" // %edx points to nlr_buf
"mov 28(%%edx), %%esi \n" // load saved %esi
"mov 24(%%edx), %%edi \n" // load saved %edi
"mov 20(%%edx), %%ebx \n" // load saved %ebx
"mov 16(%%edx), %%esp \n" // load saved %esp
"mov 12(%%edx), %%ebp \n" // load saved %ebp
"mov 8(%%edx), %%eax \n" // load saved %eip
"mov %%eax, (%%esp) \n" // store saved %eip to stack
"xor %%eax, %%eax \n" // clear return register
"inc %%al \n" // increase to make 1, non-local return
"ret \n" // return
: // output operands
: "r"(top) // input operands
: // clobbered registers
);
}
This clearly stated that the C-level variable should be a parameter of the
assembly, which then moved it into the correct register.
Whereas now it's:
NORETURN void nlr_jump_tail(nlr_buf_t *top) {
(void)top;
__asm volatile (
"mov 28(%edx), %esi \n" // load saved %esi
"mov 24(%edx), %edi \n" // load saved %edi
"mov 20(%edx), %ebx \n" // load saved %ebx
"mov 16(%edx), %esp \n" // load saved %esp
"mov 12(%edx), %ebp \n" // load saved %ebp
"mov 8(%edx), %eax \n" // load saved %eip
"mov %eax, (%esp) \n" // store saved %eip to stack
"xor %eax, %eax \n" // clear return register
"inc %al \n" // increase to make 1, non-local return
"ret \n" // return
);
for (;;); // needed to silence compiler warning
}
This just tries to perform operations on an arbitrary register (edx in
this case). The outcome is as expected: barring the pure random luck of the
compiler happening to put the right value in that register, there's a
crash.
2. Non-critical assessment.
The original commit message says "There is a small overhead introduced
(typically 1 machine instruction)". That machine instruction is a call if
the compiler doesn't perform tail-call optimization (which happens
regularly), and it's 1 instruction only with the broken code shown above;
fixing it requires adding more. With the inefficiencies already present in
the NLR code, the overhead becomes "considerable" (several times more than
1%), not "small".
The commit message also says "This eliminates duplicated code.". An
obvious way to eliminate duplication would be to factor out common code
to macros, not introduce overhead and breakage like above.
3. Faulty motivation.
All this started with a report of warnings/errors happening for a niche
compiler. It could have been solved in one of the direct ways: a) fixing it
just for the affected compiler(s); b) rewriting it in proper assembly (like
it was before, BTW); c) not doing anything at all, since MICROPY_NLR_SETJMP
exists exactly to address minor-impact cases like this (where a) or b) are
not applicable). Instead, a backwards "solution" was put forward, leading
to all the issues above.
The best action thus appears to be revert and rework, not trying to work
around what went haywire in the first place.
These were copied from the stm32 port (then stmhal) at the very beginning
of this port, with the anticipation that the esp8266 port would have board
definition files with a list of valid pins and their names. But that has
not been implemented and likely won't be, so remove the corresponding lines
from the Makefile.
This patch adds in internal config value MICROPY_HW_ENABLE_HW_I2C that is
automatically configured, and enabled only if one or more hardware I2C
ports are defined in the mpconfigboard.h file. If none are defined then
the pyb.I2C class is excluded from the build, along with all supporting
code. The machine.I2C class will still be available for software I2C.
Disabling all hardware I2C on an F4 board saves around 10,000 bytes of code
and 200 bytes of RAM.
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot
of the NLR code, specifically that dealing with pushing and popping the NLR
pointer to maintain the linked-list of NLR buffers. This patch factors all
of that code out of the specific implementations into generic functions in
nlr.c. This eliminates duplicated code.
The factoring also allows to make the machine-specific NLR code pure
assembler code, thus allowing nlrthumb.c to use naked function attributes
in the correct way (naked functions can only have basic inline assembler
code in them).
There is a small overhead introduced (typically 1 machine instruction)
because now the generic nlr_jump() must call nlr_jump_tail() rather than
them being one combined function.
set_equal is called only from set_binary_op, and this guarantees that the
second arg to set_equal is always a set or frozenset. So there is no need
to do a further check.
Previously, testing of the stackless build happened (manually) in the
travis-stackless branch. However, stackless offers an important feature
set, so it's worth testing it as part of the main CI. Strict stackless is
used because it's the "real" stackless build, which avoids using the C
stack as much as possible (non-strict just prefers the heap over the C
stack, but may end up using the latter).
This implements the .pend_throw(exc) method, which sets up an exception to
be triggered on the next call to the generator's .__next__() or .send()
method. This is unlike .throw(), which immediately starts to execute the
generator to process the exception. This effectively adds Future-like
capabilities to the generator protocol (the exception will be raised in
the future).
The need for such a method arose from implementing the uasyncio wait_for()
function efficiently (its behaviour is clearly "Future"-like, and would
normally require introducing an expensive Future wrapper around all native
coroutines, like upstream asyncio does).
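A minimal usage sketch of the new method (MicroPython-specific; not
available in CPython):

    def task():
        try:
            while True:
                yield
        except ValueError:
            yield "cancelled"

    g = task()
    next(g)                     # run the generator to its first yield
    g.pend_throw(ValueError())  # nothing happens yet, the exception is only pended
    print(next(g))              # exception is raised inside task -> prints "cancelled"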
py/objgenerator: pend_throw: Return previous pended value.
This effectively allows storing an additional value (not necessarily an
exception) in a coroutine while it's not being executed. uasyncio has
exactly this use case: to mark a coro waiting in the I/O queue (and thus
not executed in the normal scheduling queue), for the purpose of
implementing the wait_for() function (cancellation of such a waiting coro
by a timeout).
tinytest is written with the idea that tests won't write to stdout: it
prints the test name without a newline, then executes the test, then
writes the status. But MicroPython tests write to stdout, so the test
output becomes a mess. So, instead print it like:
# starting basics/andor.py
... test output ...
basics/andor.py: OK
If TEST is defined, the file it refers to will be used as the testsuite
source (it should be generated with tools/tinytest-codegen.py).
The "make-bin-testsuite" script is introduced to build such a binary.
The whole idea of --list-tests is that we prepare a list of tests to run
later, and currently don't have a connection to the target board. Similarly
for --write-exp: only the "python3" binary is required for this operation,
not "micropython".
Some compilers can treat enum types as signed, in which case 3 bits is not
enough to encode all mp_raw_code_kind_t values. So change the type to
mp_uint_t.
Instead of passing through more and more options from tinytest-codegen to
run-tests --list-tests, pipe the output of run-tests --list-tests into
tinytest-codegen.
The idea that --list-tests would be enough to produce a list of tests for
tinytest-codegen didn't work, because normal run-tests processing relies
heavily on dynamic discovery of target capabilities, and test filtering
happens as a result of that.
So, approach the issue from the other end: allow arbitrary filtering
criteria to be specified as run-tests arguments. This way, specific filters
will still be hardcoded, but at least on a particular target's side,
instead of constantly patching tinytest-codegen and/or run-tests.
Because otherwise the function can return with data still waiting to be
clocked out, and CS might then be disabled before the SPI transaction is
complete. Fixes issue #3487.
Gets passed to run-tests --list-tests to get the actual list of tests to
use. If --target= is not given, the legacy set hardcoded in
tinytest-codegen itself is used.
Also, get rid of tinytest test groups - they aren't really used for
anything, and only complicate processing. Besides, one of the next
steps is to limit the number of tests per generated file to control
the binary size, which will also require a "flat" list of tests.
Lists the tests to be executed, subject to all other filters requested.
This option would be useful e.g. for scripts like tools/tinytest-codegen.py,
which currently contains hardcoded filters for a particular target and
can't work for multiple targets.
The way tinytest was used in the qemu-arm test target meant that it didn't
test much. MicroPython tests are based on matching the test output against
reference output, but qemu-arm's implementation didn't do that: it
effectively tested just that there was no exception during test execution.
The "upytesthelper" wrapper was introduced to fix that, so switch the test
implementation to use it.
This requires passing different CFLAGS when building the firmware, so
split out test-related parts to Makefile.test.
The way tinytest was used in the qemu-arm test target meant that it didn't
test much. MicroPython tests are based on matching the test output against
reference output, but qemu-arm's implementation didn't do that: it
effectively tested just that there was no exception during test execution.
The "upytesthelper" wrapper was introduced to fix that, and so the test
generator is now switched to generate test code for it.
Also, fix PEP8 and other code-style issues.
Tinytest is a classical assert-style framework, but MicroPython tests work
in a different way: they produce output, and that output should be matched
against the expected output to see if a test passes. upytesthelper adds
exactly the helper functions needed to make that possible.
Code lineage:
osdebug() is based loosely on the version in esp8266, but there didn't
seem to be an obvious way of choosing a particular UART. The basic
behavior is the same, though: provide None, and logging is disabled;
provide an integer and logging is restored to the default level.
To build on that, and because the IDF provides more functionality, a
second parameter has now been implemented which allows the active log
level to be set:
esp.osdebug(uart[, level])
The module has a corresponding set of LOG_ values to set this accordingly.
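For example, a usage sketch (the exact LOG_* constant name used here,
esp.LOG_DEBUG, is one of the values exposed by the esp module):

    import esp

    esp.osdebug(None)              # disable vendor OS debug logging
    esp.osdebug(0)                 # route logging to UART(0) at the default level
    esp.osdebug(0, esp.LOG_DEBUG)  # same, but explicitly set the level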
When configuring a static set of values with ifconfig() the DNS was not
being set. This patch fixes that, and additionally uses the tcpip_adapter
API to ensure it is thread safe.
Further discussion is here:
https://github.com/micropython/micropython-esp32/issues/210/
This commit is a combination of 216 commits from the initial stages of
development of this port, up to and including the point where the code was
moved to the ports/esp32 directory. These commits were mostly concerned
with setting up the build system and getting a reliable port working with
basic features. The following is a digest of the original commits in their
original order (most recent listed first), grouped where possible per
author. The list is here to give credit for the work and provide some
level of traceability and accountability. For the full history of
development please consult the original repository.
All code is MIT licensed and the relevant copyright holders are listed in
the comment-header of each file.
Damien George <damien.p.george@gmail.com>
esp32: Update to latest ESP IDF.
esp32: Update module symlinks now that code lives under the ports dir.
esp32: Update to compile with new ports/esp32 directory structure.
esp32: Move it to the ports/ directory.
esp32/machine_uart: Don't save baudrate but compute it instead.
esp32/modsocket: Add socket.readinto() method.
esp32/modesp: Add esp.gpio_matrix_in and esp.gpio_matrix_out functions.
esp32/machine_uart: Wait for all data to be tx'd before changing config.
NyxCode <moritz.bischof1@gmail.com>
esp32: Add note to README.md about updating the submodules of ESP IDF.
Anthony Briggs <anthony.briggs@gmail.com>
esp32: Update README.md installation and flashing instructions.
Javier Candeira <javier@candeira.com>
esp32: Raise error when setting input-only pin to output.
With help from Adrian Smith (fon@thefon.net)
Javier Candeira <javier@candeira.com>
esp32: Replace exception raising with corresponding mp_raise_XXX funcs.
Tisham Dhar <whatnickd@gmail.com>
esp32: Add some specific notes about building on Windows using WSL.
Ben Gamari <ben@smart-cactus.org>
esp32: Provide machine.Signal class.
Damien George <damien.p.george@gmail.com>
esp32/modnetwork: Implement AP version of network.isconnected().
Eric Poulsen <eric@zyxod.com>
esp32/README.md: Add note about btree submodule initialization.
Damien George <damien.p.george@gmail.com>
esp32: Make firmware.bin start at 0x1000 to allow flash size autodetect.
esp32: Changes to follow latest version of upstream uPy.
esp32/Makefile: Separate ESP-specific inc dirs to avoid header clashes.
esp32: Enable "btree" database module.
esp32: Update to latest ESP IDF.
Roosted7 <thomasroos@live.nl>
esp32: Update to latest ESP-IDF.
Alex King <alex_w_king@yahoo.com>
esp32/machine_timer: Add support for esp32 hardware timer.
Code lineage:
Timer() is based loosely on the version in esp8266, although the
implementation differs significantly because of the change in
the underlying platform.
Damien George <damien.p.george@gmail.com>
esp32/machine_uart: Increase UART TX buffer size to 64.
esp32/modules: Update dht symlink.
esp32/mpconfigport.h: Enable utimeq module, needed for uasyncio.
esp32: Changes to follow latest version of upstream uPy.
esp32: Update to latest ESP-IDF.
esp32/machine_uart: Add uart.any() method.
esp32/machine_uart: Uninstall the UART driver before installing it.
Thomas Roos <mail@thomasroos.nl>
esp32: Update to latest ESP-IDF.
Eric Poulsen <eric@zyxod.com>
esp32/modsocket: Make read/write return None when in non-blocking mode.
esp32/modsocket.c: Fix send/sendto/write for non-blocking sockets.
Odd Stråbø <oddstr13@openshell.no>
esp32: Initial working implementation of machine.UART.
Code lineage (as seen by previous commits): I copied the ESP8266 code,
renamed pyb -> machine, and used esp-idf as a reference while implementing
minimal functionality. I provide all of my changes under the MIT license.
Odd Stråbø <oddstr13@openshell.no>
esp32/machine_uart: Rename pyb to machine.
esp32: Copy machine_uart.c from esp8266 port.
Damien George <damien.p.george@gmail.com>
esp32/moduos: Add uos.ilistdir() function.
esp32: Mount filesystem at the root of the VFS, following esp8266.
Andy Valencia <vandyswa@gmail.com>
esp32: Add hardware SHA1/SHA256 support via mbedtls API.
Code lineage: a copy of extmod/moduhashlib with the API invocation details
edited. Total derivative work.
Andy Valencia <vandyswa@gmail.com>
esp32: Add PWM support via machine.PWM class.
Code lineage:
I started by copying the esp8266 machine_pwm.c. I used information from the
ESP32 Technical Reference Manual, the esp-idf documentation, and the SDK's
sample ledc example code (but I did not copy that code, just studied it to
understand the SDK's API for PWM). So aside from the code copied from the
esp8266 PWM support, everything else you see is just new code I wrote.
I wasn't an employee of anybody when I wrote it, and I wrote it with the
understanding and intention that it's simply a derivative work of the
existing micropython code. I freely and willingly contribute it to the
project and intend that it not change the legal status of the micropython
code base in any way, even if it is included in that base in whole or part.
Damien George <damien.p.george@gmail.com>
esp32/modules: Add symlinks for upysh and upip.
Eric Poulsen <eric@zyxod.com>
esp32/modmachine: Add unique_id() function to machine module.
Damien George <damien.p.george@gmail.com>
esp32: Change dac_out_voltage to dac_output_voltage for new IDF API.
esp32: Update esp32.custom_common.ld to align with changes in ESP IDF.
Eric Poulsen <eric@zyxod.com>
esp32: Update to latest ESP IDF.
Damien George <damien.p.george@gmail.com>
esp32/modsocket: When resolving IP addr handle the case of host=''.
esp32: Update to latest ESP IDF.
Eric Poulsen <eric@zyxod.com>
esp32/Makefile: Change default FLASH_MODE to dio for WROOM-32 module.
esp32: Move FAT FS to start at 0x200000 and increase size to 2MiB.
Damien George <damien.p.george@gmail.com>
esp32: Remove enable_irq/disable_irq and use ATOMIC_SECTION instead.
esp32/mpconfigport.h: Provide ATOMIC_SECTION macros.
esp32/main: Restart the MCU if there is a failed NLR jump.
Daniel Campora <daniel@pycom.io>
esp32: Enable threading; be sure to exit GIL when a thread will block.
esp32: Trace the registers when doing a gc collect. Also make it thread ready.
esp32: Add threading implementation, disabled for the time being.
Damien George <damien.p.george@gmail.com>
esp32: Update to latest ESP IDF.
esp32/uart: Use high-level function to install UART0 RX ISR handler.
esp32/Makefile: Make FreeRTOS private include dir really private.
Eric Poulsen <eric@zyxod.com>
esp32: Add support for hardware SPI peripheral (block 1 and 2).
Sergio Conde Gómez <skgsergio@gmail.com>
esp32/modules/inisetup.py: Mount filesystem at /flash like ESP8266
Damien George <damien.p.george@gmail.com>
esp32: Convert to use core-provided KeyboardInterrupt exception.
esp32: Pump the event loop while waiting for rx-chr or delay_ms.
esp32: Implement Pin.irq() using "soft" scheduled interrupts.
esp32: Update to latest ESP IDF version.
Eric Poulsen <eric@zyxod.com>
esp32/README: Add troubleshooting section to the end.
tyggerjai <tyggerjai@gmail.com>
esp32: Add support for WS2812 and APA106 RGB LEDs.
Damien George <damien.p.george@gmail.com>
esp32: Add makeimg.py script to build full firmware; use it in Makefile.
esp32/modsocket: Make socket.read return when socket closes.
esp32/modsocket: Initialise the timeout on an accepted socket.
esp32/mphalport: Provide proper implementations of disable_/enable_irq.
esp32/modmachine: Add disable_irq/enable_irq functions.
Nick Moore <nick@zoic.org>
esp32/modsocket.c: add comment explaining timeout behaviour
esp32/modsocket.c: clean up send methods for retries too
esp32/modsocket.c: sockets always nonblocking, accept timeout in modsocket
esp32: Update to latest ESP IDF version.
esp32/modsocket.c: remove MSG_PEEK workaround on select ioctl.
esp32/modsocket.c: Initialize tcp when modsocket loads.
Damien George <damien.p.george@gmail.com>
esp32/main: Bump heap size from 64k to 96k.
esp32/modutime: Add time.time() function.
esp32/modsocket: Convert lwip errnos to uPy ones.
esp32/modules: Provide symlink to ds18x20 module.
esp32: Add support for onewire protocol via OneWire module.
esp32: Add support for DHT11 and DHT22 sensors.
esp32/mphalport: Improve delay and ticks functions.
esp32: Update to latest ESP IDF.
esp32/modules: Provide symlink to urequests from micropython-lib.
esp32: Populate sys.path.
Nick Moore <nick@zoic.org>
esp32/machine_dac.c: implement DAC pins as well
esp32/machine_adc.c: also machine.ADC
esp32/machine_touchpad.c: add support for touchpad
Damien George <damien.p.george@gmail.com>
esp32/README: Add hint about using GNUmakefile on case-insensitive FS.
esp32/mpconfigport.h: Enable maximum speed software SPI.
esp32: Provide improved version of mp_hal_delay_us_fast.
esp32/mpconfigport.h: Enable MICROPY_PY_BUILTINS_POW3 option.
esp32: Update to latest ESP IDF.
esp32: Convert to use new oofatfs library and generic VFS sub-system.
esp32: Enable help('modules') to list builtin modules.
esp32: Convert to use new builtin help function.
Aaron Kelly <AaronKelly@email.com>
esp32/README: Add comment about ESP-IDF version
Damien George <damien.p.george@gmail.com>
esp32: Consistently use size_t instead of mp_uint_t.
esp32: Change "Micro Python" to "MicroPython" in license comments.
esp32/Makefile: Use -C argument to git instead of cd'ing.
esp32/help: Add section to help about using the network module.
esp32/README: Add section about configuring and using an ESP32 board.
esp32/README: Remove paragraph about buggy toolchain, it's now fixed.
esp32/modnetwork: Change network init logging from info to debug.
esp32/modnetwork: Don't start AP automatically when init'ing wifi.
esp32/modsocket: Implement socket.setsockopt, to support SO_REUSEADDR.
esp32: Update to latest ESP IDF.
esp32/sdkconfig.h: Remove unused CONFIG_ESPTOOLPY_xxx config settings.
esp32/modsocket: Add support for DGRAM and RAW, and sendto/recvfrom.
esp32/modsocket: Fix return value of "port" in socket.accept.
esp32/modsocket: Make socket.recv take exactly 2 args.
esp32: Enable ussl module, using mbedtls component from ESP IDF.
esp32/modsocket: Rename "socket" module to "usocket".
esp32/sdkconfig: Increase max number of open sockets from 4 to 8.
esp32/modsocket: Add error checking for creating and closing sockets.
esp32/modsocket: Use _r (re-entrant) versions of LWIP socket API funcs.
esp32/modsocket: Raise an exception if socket.connect did not succeed.
esp32/modsocket: Make socket.accept return a tuple: (client, addr).
esp32/modsocket: Use m_new_obj_with_finaliser instead of calloc.
esp32/Makefile: Add check for IDF version, and warn if not supported.
esp32/esp32.custom_common.ld: Update to follow changes in IDF.
esp32: Update to latest ESP IDF.
nubcore <x@nubcore.com>
esp32: add #define CONFIG_ESP32_WIFI_RX_BUFFER_NUM 25
Nick Moore <nick@zoic.org>
esp32/modsocket.c: add in sendall and makefile methods #10
esp32/modsocket.c: fixups for #10
esp32/modsocket.c: fix copyright, socket_recv gets param and exception
esp32/modnetwork.c: fix copyright, network.active param to bool
Damien George <damien.p.george@gmail.com>
esp32/modnetwork: Implement wlan.isconnected() method.
esp32/modnetwork: Add initial implementation of wlan.config().
esp32/modnetwork: Simplify event_handler messages.
Nick Moore <nick@zoic.org>
esp32/modsocket.c: support for ioctl, settimeout, setblocking, getaddrinfo
Damien George <damien.p.george@gmail.com>
esp32/README: Add comment about FLASH_MODE being dio.
esp32: Update to latest ESP IDF.
esp32/modnetwork: Remove unnecessary indirection variable for scan list.
esp32/modnetwork: Check that STA is active before trying to scan.
esp32/mphalport: Replace portTICK_RATE_MS with portTICK_PERIOD_MS.
esp32/README: Add comment about using $(HOME) in makefile.
esp32/modnetwork: Use memset instead of bzero, the latter is deprecated.
esp32/modnetwork: Improve error handling when STA is connecting to AP.
esp32/Makefile: Use tab instead of spaces, and use shorter variable.
Nick Moore <nick@zoic.org>
esp32/modsocket.c: AF_*, SOCK_* and IPPROTO_* constants
esp32/modsocket.c: socket.settimeout implementation
Damien George <damien.p.george@gmail.com>
esp32/Makefile: Update to latest ESP IDF.
Nick Moore <nick@zoic.org>
esp32/modsocket.c: use mp streams for sockets
esp32: network.WLAN.ifconfig based on esp8266 version
esp32: Fix up exception handling
esp32: sketchy modsocket ... revisit this once modnetwork is sorted
esp32: First cut at modnetwork, manually rebased from prev. version
Damien George <damien.p.george@gmail.com>
esp32/help: Update help text.
esp32: Add info about Microbric Pty Ltd being the sponsor of this port.
esp32: Add README.md file.
esp32/mpconfigport.h: Add weak links to many of the builtin modules.
esp32: Enable soft implementation of machine.SPI class.
esp32/Makefile: Simplify APP_LD_ARGS by using OBJ variable.
esp32/Makefile: Reorganise Makefile and add some comments.
esp32/Makefile: Clean up CFLAGS for ESP IDF components.
esp32/Makefile: Tidy up names of ESP IDF components, to match dir name.
esp32/Makefile: Define and use ESPCOMP variable.
esp32: Update to latest ESP IDF.
esp32/main: Enable filesystem support.
esp32: Use custom ld script to ensure correct code get placed in iram.
esp32: Update to latest ESP IDF.
esp32/main: Pin the uPy task to core 0.
esp32: Update to use latest ESP IDF.
esp32: Disable boot-up scripts, spi_flash_erase_sector no longer works.
esp32: Add scripts to init and mount filesystem.
esp32: Enable frozen bytecode, with scripts stored in "modules/" dir.
esp32/modesp: Increase flash_user_start position to 1Mbyte.
esp32/Makefile: Add "erase" target for convenient erasure.
esp32/sdkconfig: Reorder config settings to put common things together.
esp32/sdkconfig: Change to use single core only.
esp32: Add esp module.
esp32/uart.c: Make sure uart ISR handler is all in iram.
esp32/main.c: Use ESP_TASK_PRIO_MIN + 1 for mp_task's priority.
esp32/Makefile: Use only bare-minimum flags when compiling .S files.
esp32/Makefile: Rename "firmware" to "application".
esp32: Update ESP IDF version.
esp32/Makefile: Add declarations to build bootloader and partitions.
esp32/Makefile: When deploying, write the application last.
esp32/Makefile: Use $(INC) variable instead of listing include dirs.
esp32/Makefile: Use locally built versions of freertos and newlib libs.
esp32: Add low-level uart handler with ISR and ringbuf for stdin.
esp32: Add machine.idle() function.
esp32: Add machine.I2C class.
esp32: Enable machine.time_pulse_us.
esp32: Add initial implementation of machine.Pin class.
esp32: Prepare main.c for using xTaskCreateStatic.
esp32: Clean up mphalport.h.
esp32: Add initial uos module.
esp32: Clean up mpconfigport.h, enable more features.
esp32: Use new reset function.
esp32: Update to latest ESP IDF.
esp32: Add idf-version target to Makefile, to track IDF commit.
esp32: Initial port to ESP32.
This is a bit of a clumsy way of doing it but solves the issue of __init__
not running when a module is imported via its weak-link name. Ideally a
better solution would be found.
"Builtin" tinytest-based testsuite as employed by qemu-arm (and now
generalized by me to be reusable for other targets) performs simplified
detection of skipped tests, it treats as such tests which raised SystemExit
(instead of checking got "SKIP" output). Consequently, each "SKIP" must
be accompanied by SystemExit (and conversely, SystemExit should not be
used if test is not skipped, which so far seems to be true).
Before this patch, if a user defined the __new__() function for a class
then two instances of that class would be created: once before __new__ is
called and once during the __new__ call (assuming the user creates some
instance, eg using super().__new__, which is most of the time). The first
one was then discarded. This refactor makes it so that a new instance is
only created if the user __new__ function doesn't exist.
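A sketch of the behaviour this enables (assuming object.__new__ is
available, ie MICROPY_CPYTHON_COMPAT is enabled):

    class Singleton:
        _inst = None

        def __new__(cls):
            # no extra instance is created before __new__ runs
            if cls._inst is None:
                cls._inst = super().__new__(cls)
            return cls._inst

        def __init__(self):
            pass

    print(Singleton() is Singleton())  # True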
This patch cleans up and generalises part of the code which handles
overriding and calling a native base-class's __init__ method. It defers
the call to the native make_new() function until after the user (Python)
__init__() method has run. That user method now has the chance to call the
native __init__/make_new and pass it different arguments. If the user
doesn't call the super().__init__ method then it will be called
automatically after the user code finishes, to finalise construction of the
instance.
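A sketch of what this allows (the class here is only an illustration):

    class MyList(list):
        def __init__(self, n):
            # the native list.__init__ has not run yet, so it can be called
            # with different arguments than the user constructor received
            super().__init__(range(n))

    print(MyList(3))  # [0, 1, 2]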
The nan-boxing representation has an extra 16 bits of space to store
small-int values, and making use of it allows creating and manipulating
full 32-bit positive integers (ie up to 0xffffffff) without using the heap.
These tests involve testing allocation-free function calls, and in strict
stackless mode, it's not possible to make a function call with the heap
locked (because the function activation record, aka frame, is allocated on
the heap).
In strict stackless mode, it's not possible to make a function call with
the heap locked (because the function activation record, aka frame, is
allocated on the heap). So, if the only purpose of a function is to
introduce local variable scope, move the heap lock/unlock calls inside the
function.
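A minimal sketch of the resulting pattern:

    import micropython

    def test():
        # the heap is locked only around the allocation-free code under test,
        # so calling test() itself may still allocate its frame on the heap
        micropython.heap_lock()
        x = 1 + 2  # must not allocate
        micropython.heap_unlock()
        return x

    print(test())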
This patch introduces the MICROPY_ENABLE_PYSTACK option (disabled by
default) which enables a "Python stack" that allows memory to be allocated
and freed in a scoped, or Last-In-First-Out (LIFO) way, similar to alloca().
A new memory allocation API is introduced along with this Py-stack. It
includes both "local" and "nonlocal" LIFO allocation. Local allocation is
intended to be equivalent to using alloca(), whereby the same function must
free the memory. Nonlocal allocation is where another function may free
the memory, so long as it's still LIFO.
Follow-up patches will convert all uses of alloca() and VLA to the new
scoped allocation API. The old behaviour (using alloca()) will still be
available, but when MICROPY_ENABLE_PYSTACK is enabled then alloca() is no
longer required or used.
The benefits of enabling this option are (or will be once subsequent
patches are made to convert alloca()/VLA):
- Toolchains without alloca() can use this feature to obtain correct and
efficient scoped memory allocation (compared to using the heap instead
of alloca(), which is slower).
- Even if alloca() is available, enabling the Py-stack gives slightly more
efficient use of stack space when calling nested Python functions, due to
the way that compilers implement alloca().
- Enabling the Py-stack with the stackless mode allows for even more
efficient stack usage, as well as retaining high performance (because the
heap is no longer used to build and destroy stackless code states).
- With Py-stack and stackless enabled, Python-calling-Python is no longer
recursive in the C mp_execute_bytecode function.
The micropython.pystack_use() function is included to measure usage of the
Python stack.
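For example, a small measurement sketch (only works on a build with
MICROPY_ENABLE_PYSTACK enabled, since the function doesn't exist otherwise):

    import micropython

    def f():
        return micropython.pystack_use()

    print("pystack in use at top level:", micropython.pystack_use())
    print("pystack in use inside a call:", f())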
This function was implemented as an experiment, and was enabled only in the
unix port. To remind, it allows accessing arbitrary files frozen as source
modules (vs bytecode).
However, further experimentation showed that the same functionality can
be implemented with frozen bytecode. The process requires more steps, but
with a suitable toolset it doesn't matter much. The process is:
1. Convert binary files into a "Python resource module" with
tools/mpy_bin2res.py.
2. Freeze it as bytecode.
3. Use micropython-lib's pkg_resources.resource_stream() to access it.
In other words, the extra step is using tools/mpy_bin2res.py (because
there would be a wrapper for uio.resource_stream() anyway).
Going the frozen bytecode route allows more flexibility, and the same or
better efficiency:
1. Frozen source support can be disabled altogether for additional code
savings.
2. Resources can also be accessed as a buffer, not just as a stream.
There are a few caveats too:
1. The overhead of storing a resource in a "Python resource module" vs
storing it directly wasn't actually profiled, but it's assumed that the
overhead is small.
2. The "efficiency" claim above applies to the case when the resource
file is frozen as bytecode. If it's not, it will actually take a
lot of RAM on loading. But in this case, the resource file should not
be used (i.e. generated) in the first place, and micropython-lib's
pkg_resources.resource_stream() implementation has the appropriate
fallback to read the raw files instead. This still poses some distribution
issues, e.g. to be deployable to baremetal ports (which almost certainly
require freezing as bytecode), a distribution package should include the
resource module. But for a non-freezing deployment, the presence of the
resource module will lead to memory inefficiency.
All the discussion above reminds why uio.resource_stream() was implemented
in the first place - to address some of the issues above. However, since
then, the frozen bytecode approach seems to prevail, so, while there are
still some issues to address with it, this change is being made.
This change saves 488 bytes for the unix x86_64 port.
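A sketch of step 3 above, with hypothetical package and resource names, and
assuming the resource module generated by tools/mpy_bin2res.py is frozen
alongside the package:

    import pkg_resources

    # "mypkg" and "data.txt" are placeholders for an actual package/resource
    f = pkg_resources.resource_stream("mypkg", "data.txt")
    print(f.read())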
This target removes any stray files (i.e. anything not committed to git)
from the scripts/ and modules/ dirs (or whatever FROZEN_DIR and
FROZEN_MPY_DIR are set to).
The expected workflow is:
1. make clean-frozen
2. micropython -m upip -p modules <packages_to_freeze>
3. make
As it can be expected that people may drop random things in those dirs
which they may miss later, the content is actually backed up before
cleaning.
This is the second part of the fun_bc_call() vs
mp_obj_fun_bc_prepare_codestate() common code refactor. This factors out
the code that initializes the codestate object. After this patch,
mp_obj_fun_bc_prepare_codestate() is effectively DECODE_CODESTATE_SIZE()
followed by allocation followed by INIT_CODESTATE(), and fun_bc_call()
starts with that too.
fun_bc_call() starts with almost the same code as
mp_obj_fun_bc_prepare_codestate(); the only difference is the way the
codestate object is allocated (heap vs stack with a heap fallback).
Still, it would be nice to avoid code duplication to make further
refactoring easier.
So, this commit factors out the common code before the allocation -
decoding and calculating the codestate size. It produces two values,
so it is structured as a macro which writes to 2 variables passed as
arguments.
Command-line argc and argv should be passed, and as we don't have them,
placeholders were passed, but incorrectly. As we don't have them, just
pass 0/NULL. Looking at the source, this might lead to problems under
Windows, but this test doesn't run under Windows.
Also, use the "%d" printf format consistently with the rest of the codebase.
The assembler back-end for most architectures needs to know if a jump is
backwards in order to emit optimised machine code, and they do this by
checking if the destination label has been set or not. So always reset
label offsets to -1 (this reverts partially the previous commit, with some
minor optimisation for the if-logic with the pass variable).
Clearing the labels to -1 is purely a debugging measure. For release
builds there is no need to do it as the label offset table should always
have the correct value assigned.
Accessing them will crash immediately, instead of still working for some
time until overwritten by some other data, which leads to much less
deterministic crashes.
This is mostly a workaround for the forceful rebuilding of mpy-cross on
every codebase change. If this file has debug logging enabled (by
patching), the mpy-cross build fails.
Before that, the output was truncated to 32 bits. Only the "%x" format is
handled, because a typical use is for addresses.
This refactor actually decreased x86_64 code size by 30 bytes.
This allows the function to raise an exception when unknown keyword args
are passed in. This patch also reduces code size by (in bytes):
bare-arm: -24
minimal x86: -76
unix x64: -56
unix nanbox: -84
stm32: -40
esp8266: -68
cc3200: -48
Furthermore, this patch adds space (" ") to the set of ROM qstrs which
means it doesn't need to be put in RAM if it's ever used.
Return the result of the called function. If an exception happened, return
MP_OBJ_NULL. This allows using mp_call_function_*_protected() with
callbacks returning values, etc.
The parameter order in the example for ticks_diff was incorrect. If it's
"too early" that means the scheduled time is greater than the current time,
and if it's "running late" then the scheduled time is less than the current
time.
This commit essentially reverts aa9dbb1b03
where this if-condition was added. It seems that even when that commit
was made the code was never reached by any tests, nor reachable by
analysis (see below). The same is true with the code as it currently
stands: no test triggers this if-condition, nor any uasyncio examples.
Analysing the flow of the program also shows that it's not reachable:
==START==
-> to trigger this if condition mp_execute_bytecode() must return
MP_VM_RETURN_YIELD with *sp==MP_OBJ_STOP_ITERATION
-> mp_execute_bytecode() can only return MP_VM_RETURN_YIELD from the
MP_BC_YIELD_VALUE bytecode, which can happen in 2 ways:
-> 1) from a "yield <x>" in bytecode, but <x> must always be a proper
object, never MP_OBJ_STOP_ITERATION; ==END1==
-> 2) via yield from, via mp_resume() which must return
MP_VM_RETURN_YIELD with ret_value==MP_OBJ_STOP_ITERATION, which
can happen in 3 ways:
-> 1) it delegates to mp_obj_gen_resume(); go back to ==START==
-> 2) it returns MP_VM_RETURN_YIELD directly but with a guard that
ret_val!=MP_OBJ_STOP_ITERATION; ==END2==
-> 3) it returns MP_VM_RETURN_YIELD with ret_val set from
mp_call_method_n_kw(), but mp_call_method_n_kw() must return a
proper object, never MP_OBJ_STOP_ITERATION; ==END3==
The above shows there is no way to trigger the if-condition and it can be
removed.
Prior to this fix, enabling WebREPL for the first time via webrepl_setup
did not work at all because "boot.py" did not contain any lines with
"webrepl" in them that could be uncommented.
These checks are assumed to be true in all cases where gc_realloc is
called with a valid pointer, so no need to waste code space and time
checking them in a non-debug build.
So long as the input qstr identifier is valid (below the maximum number of
qstrs) the function will always return a valid pointer. This patch
eliminates the "return 0" dead-code.
This time it hopefully should work reliably, using the make $(wildcard)
function, which in this case either expands to an existing
prj_$(BOARD).conf file, or to an empty string for a non-existing one.
This patch improves parsing of floating point numbers by converting all the
digits (integer and fractional) together into a number 1 or greater, and
then applying the correct power of 10 at the very end. In particular the
multiple "multiply by 0.1" operations to build a fraction are now combined
together and applied at the same time as the exponent, at the very end.
This helps to retain precision during parsing of floats, and also includes
a check that the number doesn't overflow during the parsing. One benefit
is that a float will have the same value no matter where the decimal point
is located, eg 1.23 == 123e-2.
Put the offset first in OR expressions, and use the "offset" var instead of
hardcoded numbers. Hopefully, this will make it more self-describing
and show the patterns better.
Dramatically improves TCP sending throughput because without an explicit
call to tcp_output() the data is only sent to the lower layers via the
lwIP slow timer which (by default) ticks every 500ms.
Before this patch MP_BINARY_OP_IN had two meanings: coming from bytecode it
meant that the args needed to be swapped, but coming from within the
runtime it meant that the args were already in the correct order. This led
to some confusion in the code and comments stating how args were reversed.
It also led to 2 bugs: 1) containment for a subclass of a native type
didn't work; 2) the expression "{True} in True" would illegally succeed and
return True. In both of these cases it was because the args to
MP_BINARY_OP_IN ended up being reversed twice.
To fix these things this patch introduces MP_BINARY_OP_CONTAINS which
corresponds exactly to the __contains__ special method, and this is the
operator that built-in types should implement. MP_BINARY_OP_IN is now only
emitted by the compiler and is converted to MP_BINARY_OP_CONTAINS by
swapping the arguments.
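The Python-visible effect of the two fixed bugs, as a small sketch:

    class MyList(list):
        pass

    # containment now works for subclasses of native types
    print(2 in MyList([1, 2, 3]))  # True

    try:
        {True} in True             # the RHS is not a container
    except TypeError:
        print("TypeError")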
In mp_binary_op, there is no need to explicitly check for type->getiter
being non-null and raising an exception because this is handled exactly by
mp_getiter(). So just call the latter unconditionally.
This patch introduces a new compile-time config option to disable multiple
inheritance at the Python level: MICROPY_MULTIPLE_INHERITANCE. It is
enabled by default.
Disabling multiple inheritance eliminates a lot of recursion in the call
graph (which is important for some embedded systems), and can be used to
reduce code size for ports that are really constrained (by around 200 bytes
for Thumb2 archs).
With multiple inheritance disabled all tests in the test-suite pass except
those that explicitly test for multiple inheritance.
This is a low-cost evaluation kit board from ST based on the STM32
Nucleo-144 form factor. It uses the STM32F746ZG MCU in the LQFP144
package. The MCU has 1MB of flash and 320kB of system RAM, and the
Cortex-M7 runs at up to 216MHz.
It's possible to use the methods (eg ilistdir) of a VFS FatFS object
without it being mounted in the VFS itself. This previously worked but
only because FatFS was "mounting" the filesystem automatically when any
function (eg f_opendir) was called. But it didn't work for ports that used
synchronisation objects (_FS_REENTRANT) because they are only initialised
via a call to f_mount. So, call f_mount explicitly when creating a new
FatFS object so that everything is set up correctly. Then also provide a
finaliser to do the f_umount call, but only if synchronisation objects are
enabled (since otherwise the f_umount call does nothing).
The function mp_obj_new_str_of_type is a general str object constructor
used in many places in the code to create either a str or bytes object.
When creating a str it should first check if the string data already exists
as an interned qstr, and if so then return the qstr object. This patch
makes the function have such behaviour, which helps to reduce heap usage by
reusing existing interned data where possible.
The old behaviour of mp_obj_new_str_of_type (which didn't check for
existing interned data) is made available through the function
mp_obj_new_str_copy, but should only be used in very special cases.
One consequence of this patch is that the following expression is now True:
'abc' is ' abc '.split()[0]
This patch simplifies the str creation API to favour the common case of
creating a str object that is not forced to be interned. To force
interning of a new str the new mp_obj_new_str_via_qstr function is added,
and should only be used if warranted.
Apart from simplifying the mp_obj_new_str function (and making it have the
same signature as mp_obj_new_bytes), this patch also reduces code size by a
bit (-16 bytes for bare-arm and roughly -40 bytes on the bare-metal archs).
Rationale:
* Calling Python build tool scripts from makefiles should be done
consistently using `python </path/to/script>`, instead of relying on the
correct shebang line in the script [1] and the executable bit on the
script being set. This is more platform-independent.
* The name/path of the Python executable should always be used via the
makefile variable `PYTHON` set in `py/mkenv.mk`. This way it can be
easily overwritten by the user with `make PYTHON=/path/to/my/python`.
* The Python executable name should be part of the value of the makefile
variable, which stands for the build tool command (e.g. `MAKE_FROZEN` and
`MPY_TOOL`), not part of the command line where it is used. If a Python
tool is substituted by another (non-python) program, no change to the
Makefiles is necessary, except in `py/mkenv.mk`.
* This also solves #3369 and #1616.
[1] There are systems, where even the assumption that `/usr/bin/env` always
exists, doesn't hold true, for example on Android (where otherwise the unix
port compiles perfectly well).
All the asm macro names that convert a particular architecture to a generic
interface now follow the convention whereby the "destination" (usually a
register) is specified first.
Recent vendor SDKs ship libs with code in .text section, which previously
was going into .irom0.text. Adjust the linker script to route these
sections back to iROM (follows upstream change).
The SHA1 hashing functionality is provided via the "axtls" library's
implementation, and hence is unavailable when the "axtls" library is not being
used. This change provides the same SHA1 hashing functionality when using the
"mbedtls" library by using its implementation instead.
Macros to convert big-endian values to host byte order and vice-versa.
These were defined in an ad-hoc way for some ports (e.g. esp8266); allow
reuse by providing default implementations, while allowing ports to
override them.
In the vendor SDK 2.1.0, some of the functions which previously didn't
have prototypes, finally acquired them. Change prototypes on our side
to match those in vendor headers, to avoid warnings-as-errors.
If SSL_EAGAIN is returned (which is a feature of MicroPython's axTLS fork),
return EAGAIN.
Original axTLS returns SSL_OK both when there's no data to return to the
user yet and when the underlying stream returns EAGAIN. That's not
distinctive enough: for example, the original module code works well for a
blocking stream, but will infinite-loop for a non-blocking socket with
EAGAIN. But if we fix the non-blocking case, blocking calls to .read() will
return a few None's initially (while axTLS progresses through the
handshake).
Using SSL_EAGAIN allows fixing the non-blocking case without regressing the
blocking one.
Note that this only handles the case of non-blocking reads of application
data. The initial handshake and writes still don't support non-blocking
mode and must be done in the blocking way.
The technique of using alloca is how dotted import names are composed in
mp_import_from and mp_builtin___import__, so use the same technique in the
compiler. This puts less pressure on the heap (only the stack is used if
the qstr already exists, and if it doesn't exist then the standard qstr
block memory is used for the new qstr rather than a separate chunk of the
heap) and reduces overall code size.
This reverts commit 3289b9b7a7.
The commit broke building on MINGW because the filename became
micropython.exe.exe. A proper solution to support more Windows build
environments requires more thought and testing.
Per the comment found here
https://github.com/micropython/micropython-esp32/issues/209#issuecomment-339855157,
this patch adds finaliser code to prevent memory leaks from ussl objects,
which is especially useful when memory for a ussl context is allocated
outside the uPy heap. This patch is in-line with the finaliser code found
in many modsocket implementations for various ports.
This feature is configured via MICROPY_PY_USSL_FINALISER and is disabled by
default because there may be issues using it when the ussl state *is*
allocated on the uPy heap, rather than externally.
With inplace methods now disabled by default, it makes sense to enable
reverse methods, as they allow for more useful features, e.g. they allow
the datetime module to implement both 2 * HOUR and HOUR * 2 (where
HOUR is e.g. a timedelta object).
This allows configuring support for inplace special methods separately,
similar to "normal" and reverse special methods. This is useful because
inplace methods are "the most optional" ones; for example, if inplace
methods aren't defined, the operation will be executed using the normal
methods instead.
As a caveat, __iadd__ and __isub__ are implemented even if
MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS isn't defined. This is similar
to the state of affairs before the binary operations refactor, and allows
running existing tests even if MICROPY_PY_ALL_INPLACE_SPECIAL_METHODS
isn't defined.
If MICROPY_PY_ALL_SPECIAL_METHODS is defined, actually define all special
methods (still subject to gating by e.g. MICROPY_PY_REVERSE_SPECIAL_METHODS).
This adds quite a number of qstr's, so should be used sparingly.
VLAs can be expensive on stack usage due to stack alignment requirements,
and also the fact that extra local variables are needed to track the
dynamic size of the stack. So using fixed-size arrays when possible can
help to reduce code size and stack usage.
In this particular case, the maximum value of n_args in the VLA is 2 and so
it's more efficient to just allocate this array with a fixed size. This
reduces code size by around 30 bytes on Thumb2 and Xtensa archs. It also
reduces total stack usage of the function: on Thumb2 the usage with VLA is
between 40 and 48 bytes, which is reduced to 32; on Xtensa, VLA usage is
between 64 and 80 bytes, reduced to 32; on x86-64 it's at least 88 bytes
reduced to 80.
CPython only supports the server_hostname keyword arg via the SSLContext
object, so use that instead of the top-level ssl.wrap_socket. This allows
the test to run on CPython the same as uPy.
Also add the "Host:" header to correctly make a GET request (for URLs that
are hosted on other servers). This is not strictly needed to test the SSL
connection but helps to debug things when printing the response.
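A sketch of the CPython-side pattern used (the host name is a placeholder):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org") as ssock:
            ssock.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
            print(ssock.recv(64))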
The unix naming is historical, from before current conventions were
established. All other ports however have it as "modusocket.c", so rename
for consistency and to avoid confusion.
Update makeqstrdata.py to sort strings starting with "__" to the beginning
of the qstr list, so they get low qstr ids that are guaranteed to fit in
8 bits. Then use this property to further compact the op_id => qstr mapping
arrays.
Per https://docs.python.org/3/library/sys.html#sys.getsizeof:
getsizeof() calls the object's __sizeof__ method. Previously, "getsizeof"
was used mostly to save on a new qstr, as we don't really support calling
this method on arbitrary objects (so it was used only for reporting).
However, normalize it all now.
Not all compilers/analysers are smart enough to realise that this function
is never called if MICROPY_ERROR_REPORTING is not TERSE, because the logic
in the code uses if statements rather than #if to select whether to call
this function or not (MSC in debug mode is an example of this, but there
are others). So just unconditionally compile this helper function. The
code-base anyway relies on the linker to remove unused functions.
The legacy function pyb.repl_uart() is still provided and retains its
original behaviour (it only accepts a UART object). uos.dupterm() will now
accept any object with write/readinto methods. At the moment there is just
1 dupterm slot.
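A sketch of an object with write/readinto methods that can be passed to
uos.dupterm (the class name here is only illustrative):

    import uos

    class Tee:
        def __init__(self):
            self.log = bytearray()

        def write(self, data):
            self.log.extend(data)  # keep a copy of everything written to the REPL
            return len(data)

        def readinto(self, buf):
            return None            # no input to inject into the REPL

    uos.dupterm(Tee())  # duplicate REPL output on this object
    print("hello")
    uos.dupterm(None)   # detach again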
Without this the board will crash when deactivating a stream that doesn't
have a close() method (eg UART) or that raises an exception within the
method (eg user-defined function).
The W5200 and W5500 can support up to 80MHz so 42MHz (the maximum the
pyboard can do in its standard configuration) should be safe.
Tested to give around 1050000 bytes/sec TCP download speed on a W5500,
which is about 10% more than with the previous SPI speed of 21MHz.
Which Wiznet chip to use is a compile-time option: MICROPY_PY_WIZNET5K
should be set to either 5200 or 5500 to support either one of these
Ethernet chips. The driver is called network.WIZNET5K in both cases.
Note that this commit introduces a breaking-change at the build level
because previously the valid values for MICROPY_PY_WIZNET5K were 0 and 1
but now they are 0, 5200 and 5500.
This patch implements the basic SPI read/write functions for the W5500
chip. It also allows _WIZCHIP_ to be configured externally to select the
specific Wiznet chip.
The uos.dupterm() signature and behaviour is updated to reflect the latest
enhancements in the docs. It has minor backwards incompatibility in that
it no longer accepts zero arguments.
The dupterm_rx helper function is moved from esp8266 to extmod and
generalised to support multiple dupterm slots.
A port can specify multiple slots by defining the MICROPY_PY_OS_DUPTERM
config macro to an integer, being the number of slots it wants to have;
0 means to disable the dupterm feature altogether.
The unix and esp8266 ports are updated to work with the new interface and
are otherwise unchanged with respect to functionality.
So that a pointer to it can be passed as a pointer to math_generic_1. This
patch also makes the function work for single and double precision floating
point.
This patch changes how most of the plain math functions are implemented:
there are now two generic math wrapper functions that take a pointer to a
math function (like sin, cos) and perform the necessary conversion to and
from MicroPython types. This helps to reduce code size. The generic
functions can also check for math domain errors in a generic way, by
testing if the result is NaN or infinity combined with finite inputs.
The result is that, with this patch, all math functions now have full
domain error checking (even gamma and lgamma) and code size has decreased
for most ports. Code size changes in bytes for those with the math module
are:
unix x64: -432
unix nanbox: -792
stm32: -88
esp8266: +12
Tests are also added to check domain errors are handled correctly.
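For example, the behaviour now matches CPython's domain error handling:

    import math

    print(math.sqrt(4.0))  # 2.0
    try:
        math.sqrt(-1.0)    # finite input, non-finite result -> domain error
    except ValueError as e:
        print(e)           # math domain error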
While this console API improves handling on real hardware boards
(e.g. clipboard paste is much more reliable, as is programmatic
communication), it conversely poses problems under QEMU, apparently
because QEMU doesn't emulate UART interrupt handling faithfully. That
leads to an inability to run the testsuite on QEMU at all. To work around
that, we have to support both the old and new console routines, and use
the old ones under QEMU.
We want to close the communication object even if there were exceptions
somewhere in the code. This is important for --device exec:/execpty:,
which may otherwise leave a process running in the background.
Ideally, these should be configurable from Python (using the network
module), but as that doesn't exist, we're better off using Zephyr's native
bootstrap configuration facility.
The poweroff() and poweron() methods are used to do soft power control of
the display, and this patch makes these methods work the same for both I2C
and SPI interfaces.
Printing "(null)" when a NULL string pointer is passed to %s is a debugging
feature and not a feature that's relied upon by the code. So it only needs
to be compiled in when debugging (such as assert) is enabled, and saves
roughy 30 bytes of code when disabled.
This patch also fixes this NULL check to not do the check if the precision
is specified as zero.
Header files that are considered internal to the py core and should not
normally be included directly are:
py/nlr.h - internal nlr configuration and declarations
py/bc0.h - contains bytecode macro definitions
py/runtime0.h - contains basic runtime enums
Instead, the top-level header files to include are one of:
py/obj.h - includes runtime0.h and defines everything to use the
mp_obj_t type
py/runtime.h - includes mpstate.h and hence nlr.h, obj.h, runtime0.h,
and defines everything to use the general runtime support functions
Additional, specific headers (eg py/objlist.h) can be included if needed.
Qstr values fit in 16-bits (and this fact is used elsewhere in the code) so
no need to use more than that for the large lookup tables. The compiler
will anyway give a warning if the qstr values don't fit in 16 bits. Saves
around 80 bytes of code space for Thumb2 archs.
Building mpy-cross: this patch adds .exe to the PROG name when building
executables for host (eg mpy-cross) on Windows. make clean now removes
mpy-cross.exe under Windows.
Building MicroPython: this patch sets MPY_CROSS to mpy-cross.exe or
mpy-cross so they can coexist and use cygwin or WSL without rebuilding
mpy-cross. The dependency in the mpy rule now uses mpy-cross.exe for
Windows and mpy-cross for Linux.
CPython docs explicitly state that the RHS of a set/frozenset binary op
must be a set to prevent user errors. It also preserves commutativity of
the ops, eg: "abc" & set() is a TypeError, and so should be set() & "abc".
This change actually decreases unix (x64) code size by 160 bytes; it
increases stm32 by 4 bytes and esp8266 by 28 bytes (but the previous patch
already introduced a much larger saving).
A lot of set's methods (the mutable ones) are not allowed to operate on a
frozenset, and giving frozenset a separate locals dict with only the
methods that it supports allows the logic that verifies whether the args
are a set or a frozenset to be simplified. Even though the new frozenset
locals dict is relatively large (88 bytes on 32-bit archs) there is a much
bigger saving coming from the removal of a const string for an error
message, along with the removal of some checks for set or frozenset type.
Changes in code size due to this patch are (for ports that changed at all):
unix x64: -56
unix nanbox: -304
stm32: -64
esp8266: -124
cc3200: -40
Apart from the reduced code, frozenset now has better tab-completion
because it only lists the valid methods. And the error message for
accessing an invalid method is now more detailed (it includes the
method name that wasn't found).
This returns a complex number, following CPython behaviour. For ports that
don't have complex numbers enabled this will raise a ValueError which gives
a fail-safe for scripts that were written assuming complex numbers exist.
This adds a new configuration option to print runtime warnings and errors to
stderr. On Unix, CPython prints warnings and unhandled exceptions to stderr,
so the unix port here is configured to use this option.
The unix port already printed unhandled exceptions on the main thread to
stderr. This patch fixes unhandled exceptions on other threads and warnings
(issue #2838) not printing on stderr.
Additionally, a couple tests needed to be fixed to handle this new behavior.
This is done by also capturing stderr when running tests.
It removes the need for a wrapper Python function to dispatch to the
framebuf method which makes each function call a bit faster, roughly 2.5x.
This patch also adds the rest of the framebuf methods to the SSD class.
The timer prescaler is buffered by default, and this patch enables ARPE
which buffers the auto-reload register. With both of these registers
buffered it's now possible to smoothly change the timer's frequency and
have a smoothly varying PWM output.
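A usage sketch on a pyboard (the pin name is board-specific; X1 is assumed
here to be a TIM2_CH1-capable pin as on PYBv1.x):

    import pyb

    tim = pyb.Timer(2, freq=1000)
    ch = tim.channel(1, pyb.Timer.PWM, pin=pyb.Pin("X1"), pulse_width_percent=25)
    tim.freq(500)  # with ARPE the frequency changes without glitching the output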
Allow literal minus in char classes to be in trailing position, e.g. [a-c-].
(Previously, minus was allowed only at the start.)
This increases ARM Thumb2 code size by 8 bytes.
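For example:

    import ure

    # a trailing '-' inside a character class is now treated as a literal
    print(ure.match("[a-c-]+", "ab-c") is not None)  # True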
Current users of fixed vstr buffers (building file paths) assume that there
is no overflow and do not check for overflow after building the vstr. This
has the potential to lead to NULL pointer dereferences
(when vstr_null_terminated_str returns NULL because it can't allocate RAM
for the terminating byte) and stat'ing and loading invalid path names (due
to the path being truncated). The safest and simplest thing to do in these
cases is just raise an exception if a write goes beyond the end of a fixed
vstr buffer, which is what this patch does. It also simplifies the vstr
code.
The vstr argument to the calls to vstr_add_len are dynamically allocated
(ie fixed_buf=false) and so vstr_add_len will never return NULL. So
there's no need to check for it. Any out-of-memory errors are raised by
the call to m_renew in vstr_ensure_extra.
The aim of this patch is to rewrite the functions that create exception
instances (mp_obj_exception_make_new and mp_obj_new_exception_msg_varg) so
that they do not call any functions that may raise an exception. Otherwise
it's possible to create infinite recursion with an exception being raised
while trying to create an exception object.
The main things that are done to accomplish this are:
1. Change mp_obj_new_exception_msg_varg to just format the string, then
call mp_obj_exception_make_new to actually create the exception object.
2. In mp_obj_exception_make_new and mp_obj_new_exception_msg_varg try to
allocate all memory first using functions that don't raise exceptions.
If any of the memory allocations fail (return NULL) then degrade
gracefully by trying other options for memory allocation, eg using the
emergency exception buffer.
3. Use a custom printer backend to conservatively format strings: if it
can't allocate memory then it just truncates the string.
As part of this rewrite, raising an exception without a message, like
KeyError(123), will now use the emergency buffer to store the arg and
traceback data if there is no heap memory available.
Memory use with this patch is unchanged. Code size is increased by:
bare-arm: +136
minimal x86: +124
unix x64: +72
unix nanbox: +96
stm32: +88
esp8266: +92
cc3200: +80
This allows user classes to implement the __abs__ special method, and saves
code size (104 bytes for x86_64), even though during the refactor an issue
was fixed and a few optimizations were made:
* abs() of the minimum (negative) small-int value is calculated properly.
* objint_longlong and objint_mpz avoid allocating a new object if the
argument is already non-negative.
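A sketch of a user class using the special method (assuming the port
enables the relevant special-method support):

    class Vec:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __abs__(self):
            return (self.x ** 2 + self.y ** 2) ** 0.5

    print(abs(Vec(3, 4)))  # 5.0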
Prior to this patch calling pyb.Timer(id) would always create a new timer
instance, even if there was an existing one. This patch fixes this
behaviour to match other peripherals, like UART, such that constructing a
timer with just the id will retrieve any existing instances.
The patch also refactors the way timers are validated on construction to
simplify and reduce code size.
This patch also removes the empty type "pinbase_type" (which crashes if
accessed) and uses "machine_pinbase_type" instead as the type of the
PinBase singleton.
Given that various ports now require submodules, rewrite the section
to be more generic.
Also, add the git submodule update command to other sections so users can
get started easily.
If, for class X, X.__add__(Y) doesn't exist (or returns NotImplemented),
try Y.__radd__(X) instead.
This patch could be simpler, but undoing the operand swap and operation
switch is required to get a non-confusing error message in case __radd__
doesn't exist.
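A small sketch of the fallback in action (assuming reverse special methods
are enabled in the build):

    class Seconds:
        def __init__(self, n):
            self.n = n

        def __add__(self, other):
            if isinstance(other, Seconds):
                return Seconds(self.n + other.n)
            return NotImplemented

        def __radd__(self, other):
            # called when other.__add__ didn't handle the operation
            return Seconds(other + self.n)

    print((5 + Seconds(10)).n)  # 15, via Seconds.__radd__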
connect, send, recv, sendto and recvfrom now release the GIL. accept
already releases the GIL because it calls mp_hal_delay_ms() within its
busy-wait loop.
Previous to this patch the i2c.scan() method would do up to 100 probes per
I2C address to detect the devices on the bus. This repeated probing was a
relic from when the code was copied from the accelerometer initialisation,
which requires repeated probes while waiting for the accelerometer chip to
turn on.
But I2C devices shouldn't need more than 1 probe to detect their presence,
and the generic software I2C implementation uses 1 probe successfully. So
this patch changes the implementation to use 1 probe per address, which
significantly speeds up the scan operation.
This is to allow placing reverse ops immediately after normal ops, so
they can be tested as one range (which is an optimization for the reverse
ops introduced in the next patch).
Originally, they were grouped in blocks of 5, to make it easier e.g.
to assess the numeric code of each. But now it makes more sense to
group them by semantics/properties, and then still split them into chunks,
which usually leads to chunks of ~6 ops.
It starts a dichotomy of mp_binary_op_t values which can't appear in the
bytecode. Another reason to move it is to make the values of OP_* and
OP_INPLACE_* nicely adjacent. This will also be needed for OP_REVERSE_*,
to be introduced soon.
With MBEDTLS_DEBUG_C disabled the function mbedtls_debug_set_threshold()
doesn't exist. There's also no need to call mbedtls_ssl_conf_dbg() so a
few bytes can be saved on disabling that and not needing the mbedtls_debug
callback.
This patch adds a function utf8_check() to check for a valid UTF-8 encoded
string, and calls it when constructing a str from raw bytes. The feature
is selectable at compile time via MICROPY_PY_BUILTINS_STR_UNICODE_CHECK and
is enabled if unicode is enabled. It costs about 110 bytes on Thumb-2, 150
bytes on Xtensa and 170 bytes on x86-64.
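For example, with the check enabled, constructing a str from invalid UTF-8 is
rejected (this snippet assumes the error raised is a UnicodeError, as in
CPython, where UnicodeDecodeError is a subclass of it):

    data = bytes([0x61, 0xff, 0xfe])  # not valid UTF-8
    try:
        data.decode()
    except UnicodeError:
        print("invalid UTF-8 rejected")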
This is to keep the top-level directory clean, to make it clear what is
core and what is a port, and to allow the repository to grow with new ports
in a sustainable way.
There are 2 changes:
- remove the early initialisation of LSE and replace it with the LSEDRIVE
config (there is no reason to call HAL_RCC_OscConfig twice);
- add initialisation of the variables PLLSAI1Source and PLLSAI1M as they
are needed in Cube HAL 1.8.1.
IEEE floating point is specified such that a comparison of NaN with itself
returns false, and Python respects these semantics. This patch makes uPy
also have these semantics. The fix has a minor impact on the speed of the
object-equality fast-path, but that seems to be unavoidable and it's much
more important to have correct behaviour (especially in this case where
the wrong answer for nan==nan is silently returned).
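A quick check of the corrected semantics:

    nan = float('nan')
    print(nan == nan)        # False, per IEEE semantics
    x = nan
    print(x is x, x == x)    # True False -- identity does not imply equality for NaN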
These are now returned as "operation not supported" instead of raising
TypeError. In particular, this fixes equality for float vs incompatible
types, which now properly results in False instead of exception. This
also paves the road to support reverse operation (e.g. __radd__) with
float objects.
This is achieved by introducing mp_obj_get_float_maybe(), similar to
existing mp_obj_get_int_maybe().
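For example, with this change an equality test between a float and an
unrelated type now simply evaluates to False instead of raising:

    print(1.0 == [])   # False, instead of a TypeError
    print([] == 1.0)   # False as well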
Prior to this patch, the size of the buffer given to pack_into() was checked
for being too small by using the count of the arguments, not their actual
size. For example, a format spec of '4I' would only check that there were
4 bytes available, not 16; and 'I' would check for 1 byte, not 4.
The pack() function is ok because its buffer is created to be exactly the
correct size.
The fix in this patch calculates the total size of the format spec at the
start of pack_into() and verifies that the buffer is large enough. This
adds some computational overhead, to iterate through the whole format spec.
The alternative is to check during the packing, but that requires extra
code to handle alignment, and the check is anyway not needed for pack().
So to maintain minimal code size the check is done using struct_calcsize.
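A sketch of the behaviour this enables (the exact exception type is not
specified here, so a generic catch is used):

    import struct

    buf = bytearray(4)
    try:
        struct.pack_into('4I', buf, 0, 1, 2, 3, 4)   # needs 16 bytes, buffer has only 4
    except Exception as exc:
        print("buffer too small:", exc)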
Prior to this patch, the size of the buffer given to unpack/unpack_from was
checked for being too small by using the count of the arguments, not their
actual size. For example, a format spec of '4I' would only check that
there were 4 bytes available, not 16; and 'I' would check for 1 byte, not 4.
This bug is fixed in this patch by calculating the total size of the format
spec at the start of the unpacking function. This function anyway needs to
calculate the number of items at the start, so calculating the total size
can be done at the same time.
This patch makes a repeat counter behave the same as repeating the
typecode when there are not enough args. For example:
struct.pack('2I', 1) now behaves the same as struct.pack('II', 1).
Similar to the existing testcase, but test that returning a value of a
native type, as well as an instance of another user class, from __new__
leads to __init__ not being called, for better coverage.
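The kind of case being covered looks like this (class name is illustrative):

    class A:
        def __new__(cls):
            return 42              # a value of a native type, not an instance of A

        def __init__(self):
            print("never called")  # skipped because __new__ didn't return an A instance

    print(A())   # 42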
NotImplemented means "try other fallbacks (like calling __rop__
instead of __op__) and if nothing works, raise TypeError". As
MicroPython doesn't implement any fallbacks, signal to raise
TypeError right away.
This upgrades the HAL to the versions:
- F4 V1.16.0
- F7 V1.7.0
- L4 V1.8.1
The main changes were in the SD card driver. The vendor changed the SD
read/write functions to accept a block number instead of a byte address, so
there is no longer any need for a custom patch for this in stm32lib.
The CardType values also changed, so pyb.SDCard().info() will return
different values for the 3rd element of the tuple, but this function was
never documented.
The unary-op/binary-op enums are already defined, and there are no
arithmetic tricks used with these types, so it makes sense to use the
correct enum type for arguments that take these values. It also reduces
code size quite a bit for nan-boxing builds.
The SPI flash driver now supports using an arbitrary SPI object to
communicate with the flash chip, and in particular can use a hardware SPI
peripheral.
Otherwise, it will silently give an incorrect result for other value types,
including the CPython tuple form like "foo.png".endswith(("png", "jpg"))
(which MicroPython doesn't support, for unbloatedness).
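Illustration of the stricter check (assuming the non-str argument now raises
TypeError):

    print("foo.png".endswith("png"))        # True
    try:
        "foo.png".endswith(("png", "jpg"))  # tuple argument is not supported
    except TypeError:
        print("endswith() requires a str argument")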
Allows for simpler, smaller and faster code at run time when selecting the
board's frequency, and allows more customisation opportunities for the PLL
values depending on the target MCU.
Changes for F7 are:
- machine.reset_cause() now reports DEEPSLEEP_RESET correctly;
- machine.sleep() is further optimised to reduce power consumption;
- machine.deepsleep() is now implemented and working.
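A minimal usage sketch for the newly working deepsleep (the 10 s duration is
just an example and assumes an RTC wake-up source is available):

    import machine

    # The MCU enters its lowest-power state and wakes by resetting;
    # after wake-up machine.reset_cause() reports machine.DEEPSLEEP_RESET.
    machine.deepsleep(10000)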
Documentation diff for the framebuf module (lines prefixed with - are removed,
lines prefixed with + are added):

-.. class:: FrameBuffer(buffer, width, height, format, stride=width)
+.. class:: FrameBuffer(buffer, width, height, format, stride=width, /)

     Construct a FrameBuffer object. The parameters are:

@@ -38,9 +38,9 @@ Constructors
     - *width* is the width of the FrameBuffer in pixels
     - *height* is the height of the FrameBuffer in pixels
     - *format* specifies the type of pixel used in the FrameBuffer;
-      valid values are ``framebuf.MVLSB``, ``framebuf.RGB565``
-      and ``framebuf.GS4_HMSB``. MVLSB is monochrome 1-bit color,
-      RGB565 is RGB 16-bit color, and GS4_HMSB is grayscale 4-bit color.
+      permissible values are listed under Constants below. These set the
+      number of bits used to encode a color value and the layout of these
+      bits in *buffer*.
       Where a color value c is passed to a method, c is a small integer
       with an encoding that is dependent on the format of the FrameBuffer.
     - *stride* is the number of pixels between each horizontal line

@@ -110,8 +110,9 @@ Other methods
     corresponding color will be considered transparent: all pixels with that
     color value will not be drawn.

-    This method works between FrameBuffer's utilising different formats, but the
-    resulting colors may be unexpected due to the mismatch in color formats.
+    This method works between FrameBuffer instances utilising different formats,
+    but the resulting colors may be unexpected due to the mismatch in color
+    formats.

 Constants
 ---------

@@ -129,7 +130,7 @@ Constants
     Monochrome (1-bit) color format
     This defines a mapping where the bits in a byte are horizontally mapped.
-    Each byte occupies 8 horizontal pixels with bit 0 being the leftmost.
+    Each byte occupies 8 horizontal pixels with bit 7 being the leftmost.
     Subsequent bytes appear at successive horizontal locations until the
     rightmost edge is reached. Further bytes are rendered on the next row, one
     pixel lower.

@@ -138,7 +139,7 @@ Constants
     Monochrome (1-bit) color format
     This defines a mapping where the bits in a byte are horizontally mapped.
-    Each byte occupies 8 horizontal pixels with bit 7 being the leftmost.
+    Each byte occupies 8 horizontal pixels with bit 0 being the leftmost.
     Subsequent bytes appear at successive horizontal locations until the
     rightmost edge is reached. Further bytes are rendered on the next row, one
     pixel lower.

@@ -147,6 +148,14 @@ Constants
     Red Green Blue (16-bit, 5+6+5) color format

+.. data:: framebuf.GS2_HMSB
+
+    Grayscale (2-bit) color format
+
 .. data:: framebuf.GS4_HMSB

     Grayscale (4-bit) color format

+.. data:: framebuf.GS8
+
+    Grayscale (8-bit) color format
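A small usage sketch for the newly documented formats (the buffer size follows
from 2 bits per pixel):

    import framebuf

    # 2-bit grayscale: 4 pixels per byte, so a 16x16 frame needs 16*16//4 = 64 bytes
    buf = bytearray(16 * 16 // 4)
    fb = framebuf.FrameBuffer(buf, 16, 16, framebuf.GS2_HMSB)
    fb.fill(0)
    fb.pixel(3, 5, 2)   # color values are small ints; 2-bit gray allows 0..3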