GSoC23: Project Status Update

Ayush Singh · July 08, 2023 · #c #gsoc23 #zephyr #linux
Hello everyone. It will soon be time for the Mid-term evaluation of Google Summer of Code 2023. Thus, I decided to write a post to summarise everything I have been working on. I will also go over how it can be replicated by anyone interested.
This project has two main parts:
- BeaglePlay CC1352 Application
- BeaglePlay Linux Driver
The CC1352 application and the Linux driver communicate over UART using High-Level Data Link Control (HDLC). The BeaglePlay CC1352 application also communicates with the BeagleConnect node running greybus-for-zephyr. I have left greybus-for-zephyr largely unmodified; the changes in my fork only make it run with the latest Zephyr.
I will now review the current functionality with the working logs, which should be reproducible.
BeaglePlay Linux Driver
The BeaglePlay Linux driver is responsible for HDLC communication over UART. I am using three HDLC addresses:
- DBG: For logs from BeaglePlay CC1352. These frames are only ever received.
- Greybus: For greybus payloads. All greybus-related communication should use this address. It currently does not do much.
- MCUmgr: For MCUmgr tty communication. These frames are sent when data is written to `ttyMCU0`. Any MCUmgr frame sent from CC1352 is also streamed to `ttyMCU0`.
The Linux driver compiles against v5.10.168-ti-arm64-r103, the kernel running on BeaglePlay.
The UART transmission is done using a workqueue. Producers write the data (HDLC block) in a circular buffer. The workqueue handler then reads the circular buffer to send the block over HDLC.
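The producer/consumer handoff described above can be sketched with a plain circular buffer. This is a simplified stand-in for the kernel's kfifo machinery, not the driver's actual code; all names and the capacity are my own:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified circular buffer: producers push HDLC block bytes, the
 * workqueue handler drains them for UART transmission. Illustrative
 * only; the real driver uses kernel primitives. */
#define RB_CAP 256

struct tx_ring {
    unsigned char buf[RB_CAP];
    size_t head; /* next write position */
    size_t tail; /* next read position */
};

/* Returns the number of bytes actually queued (may be short if full). */
static size_t tx_ring_push(struct tx_ring *rb, const unsigned char *data, size_t len)
{
    size_t n = 0;
    while (n < len && (rb->head + 1) % RB_CAP != rb->tail) {
        rb->buf[rb->head] = data[n++];
        rb->head = (rb->head + 1) % RB_CAP;
    }
    return n;
}

/* Drains up to max bytes into out; this is what the workqueue
 * handler would hand to the UART. */
static size_t tx_ring_pop(struct tx_ring *rb, unsigned char *out, size_t max)
{
    size_t n = 0;
    while (n < max && rb->tail != rb->head) {
        out[n++] = rb->buf[rb->tail];
        rb->tail = (rb->tail + 1) % RB_CAP;
    }
    return n;
}
```

Decoupling producers from the serial transmit path this way means callers never block on the UART; they only fail fast if the buffer is full.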
The UART receiving is done using serdev client ops. The data is buffered until a complete HDLC frame is received. Once the frame is complete, it is handled appropriately depending on the frame address field. Finally, the receive buffer is cleared to prepare for a new frame.
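The frame-accumulation step can be illustrated with a minimal HDLC receive state machine. This sketch only does byte de-stuffing (flag `0x7E`, escape `0x7D` with XOR `0x20`); it skips the FCS check and address dispatch, and is not the driver's actual code:

```c
#include <assert.h>
#include <stddef.h>

#define HDLC_FLAG 0x7E
#define HDLC_ESC  0x7D

/* Minimal HDLC async receive state machine (byte de-stuffing only,
 * no FCS verification). Returns the frame length when a complete
 * frame has been received, 0 otherwise. */
struct hdlc_rx {
    unsigned char frame[512];
    size_t len;
    int escaped;
};

static size_t hdlc_rx_byte(struct hdlc_rx *rx, unsigned char b)
{
    if (b == HDLC_FLAG) {
        size_t len = rx->len;
        rx->len = 0;      /* clear the buffer for the next frame */
        rx->escaped = 0;
        return len;       /* 0 for back-to-back flag bytes */
    }
    if (b == HDLC_ESC) {
        rx->escaped = 1;
        return 0;
    }
    if (rx->escaped) {
        b ^= 0x20;
        rx->escaped = 0;
    }
    if (rx->len < sizeof(rx->frame))
        rx->frame[rx->len++] = b;
    return 0;
}
```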
Any data written to `ttyMCU0` is buffered until a newline (`\x0a`) is encountered. The buffered data is then queued to be sent as a single MCUmgr frame. This part mostly works, since my Zephyr application successfully processes the MCUmgr fragment; however, the MCUmgr side of things is still a work in progress.
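The buffer-until-newline step can be sketched as follows. The function and struct names are my own; for testability this version returns the completed chunk instead of queueing an HDLC frame:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the ttyMCU0 write path: bytes accumulate until 0x0a,
 * then the whole chunk is flushed as one MCUmgr frame. Names and
 * buffer size are illustrative. */
struct line_buf {
    unsigned char buf[256];
    size_t len;
};

/* Returns the number of complete frames produced by this write; the
 * most recent complete frame is copied into out/out_len. */
static int line_buf_write(struct line_buf *lb, const unsigned char *data,
                          size_t len, unsigned char *out, size_t *out_len)
{
    int frames = 0;
    for (size_t i = 0; i < len; i++) {
        if (lb->len < sizeof(lb->buf))
            lb->buf[lb->len++] = data[i];
        if (data[i] == 0x0a) {     /* end of one MCUmgr fragment */
            memcpy(out, lb->buf, lb->len);
            *out_len = lb->len;
            lb->len = 0;
            frames++;
        }
    }
    return frames;
}
```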
If an MCUmgr HDLC frame is received, its data is streamed directly to `ttyMCU0`. This path is untested, since I have not yet received a response from the BeaglePlay CC1352 MCUmgr.
BeaglePlay CC1352 Application
The BeaglePlay CC1352 Application currently has the following responsibilities:
- Handle HDLC UART Communication.
- Discover any new nodes in the network.
- Maintain a table of all active nodes and their cports.
- Maintain a list of all in-flight greybus operations.
- Handle Greybus communication with Nodes over TCP sockets.
- Logging over HDLC UART with a custom backend.
- Clean up finished greybus operations and call the associated callbacks.
HDLC UART Communication
An interrupt is used to handle receiving data over UART. This data is then written to a ring buffer. The actual processing of this data is done in the system workqueue. The workqueue buffers the input data until a complete HDLC frame is received. Then the frame is processed depending on its address.
The writing of a block is also done asynchronously using the System workqueue. A First In First Out Queue of blocks to transmit is maintained. The workqueue handler sends all pending blocks when it is called.
Node Discovery

A dedicated thread performs node discovery at a configurable interval. The node IPv6 address is currently static, since the Zephyr DNS resolver does not support DNS service discovery. However, it will be easy to add dynamic node discovery later by modifying the `get_all_nodes` function.
After querying for all greybus nodes, it checks against the active nodes table if the node is already present. In case the node is absent, it is added to the table, and the node is submitted for setup.
Node setup runs on the System Workqueue. Its job is to create a TCP socket with the specified cport (in this case, Cport0). It then adds the cport socket to the nodes table. Finally, in the case of CPort0, the GetManifestSize control request is sent to the node.
Once the GetManifestSize response is successfully received, a GetManifest control request is sent using `gb_operations->callback`. The response to this request is parsed to enumerate all CPorts available in the node. All CPorts other than CPort0 are then queued for setup.
The setup is similar to Cport0. However, in the case of these Cports, a simple Ping SVC request is sent.
Everything is logged to the Linux host using the custom HDLC logging backend.
Node Table

The nodes table is maintained as a static array of `node_table_items`.
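A minimal sketch of such an entry and its add path is below. The field names, types, and sizes are my own guesses, not the actual cc1352-firmware definitions; in particular, the real cports storage is heap-allocated after manifest parsing, while this sketch uses a fixed array for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_NODES      8   /* illustrative; the real size is configurable */
#define MAX_CPORTS     4   /* illustrative; the real array is dynamic */
#define IPV6_ADDR_LEN 16

/* Hypothetical node table entry; the actual node_table_item in
 * cc1352-firmware may differ. */
struct node_table_item {
    unsigned char addr[IPV6_ADDR_LEN]; /* node IPv6 address */
    int cport_sockets[MAX_CPORTS];     /* socket per cport, -1 if unset */
    size_t num_cports;
    bool in_use;
};

static struct node_table_item node_table[MAX_NODES];

/* Add a node known only by its IPv6 address; CPort0 is initialized
 * later, during node discovery. Returns NULL if the table is full
 * or the node already exists. */
static struct node_table_item *node_table_add(const unsigned char *addr)
{
    for (size_t i = 0; i < MAX_NODES; i++)
        if (node_table[i].in_use &&
            memcmp(node_table[i].addr, addr, IPV6_ADDR_LEN) == 0)
            return NULL;
    for (size_t i = 0; i < MAX_NODES; i++) {
        if (!node_table[i].in_use) {
            memcpy(node_table[i].addr, addr, IPV6_ADDR_LEN);
            for (size_t j = 0; j < MAX_CPORTS; j++)
                node_table[i].cport_sockets[j] = -1;
            node_table[i].num_cports = 0;
            node_table[i].in_use = true;
            return &node_table[i];
        }
    }
    return NULL;
}
```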
The size of the `node_table` array is configurable. CPort0 (the Control port) is treated specially, since a node without CPort0 is eventually cleaned up. A new node can be added with just its IPv6 address; however, CPort0 must then be initialized, which is done during node discovery.
The cports pointer is supposed to be dynamically initialized after parsing the greybus manifest. The current assumption is that cports are initialized only once, so copying into a resized cports array is not implemented yet.
A CPort can be removed by passing its socket, which is also closed. If CPort0 is removed, the whole node is removed and all of its sockets are closed. It is also possible to remove a node by IPv6 address, which likewise closes any open sockets.
Greybus Operations

A doubly linked list of in-flight greybus operations is maintained. This is because greybus operations can be sent and received out of order, making a regular queue unfit. The `gb_operation` struct is as follows (the field types here are reconstructed from the doc comment and may differ slightly from the actual firmware):

```c
/*
 * Struct to represent a greybus operation.
 *
 * @param sock: socket to perform this operation on.
 * @param operation_id: the unique id for this operation.
 * @param request_sent: flag to check if the request has been sent.
 * @param request: pointer to greybus request message.
 * @param response: pointer to greybus response message.
 * @param callback: callback function called when operation is completed.
 * @param node: operation dlist node.
 */
struct gb_operation {
	int sock;
	uint16_t operation_id;
	bool request_sent;
	struct gb_message *request;
	struct gb_message *response;
	void (*callback)(struct gb_operation *op);
	sys_dnode_t node;
};
```
The `sock` parameter will probably be removed soon, since I must also send requests to the AP (the Linux driver) over UART.
The operation id is set using an atomic counter if the operation is not one-shot. For one-shot operations, the operation id is 0.
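The id-assignment scheme can be sketched with C11 atomics. The function name is mine, and skipping 0 on counter wrap-around is my addition (since 0 marks one-shot operations):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Operation-id assignment as described above: one-shot operations
 * get id 0 (no response expected); everything else gets a unique
 * non-zero id from an atomic counter. Illustrative sketch. */
static atomic_uint_fast16_t next_operation_id = 1;

static uint16_t gb_new_operation_id(int one_shot)
{
    uint16_t id;

    if (one_shot)
        return 0;
    /* skip 0 on wrap-around: 0 is reserved for one-shot operations */
    do {
        id = (uint16_t)atomic_fetch_add(&next_operation_id, 1);
    } while (id == 0);
    return id;
}
```

An atomic counter is enough here because ids only need to be unique among concurrently in-flight operations, not globally ordered.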
The request and response are not allocated when the operation is allocated. Similarly, the operation is not queued until `gb_operation_queue` is called. This is important, since cleanup is the caller's responsibility until the operation is queued.
An optional callback can be provided, which is called in the System Workqueue. For one-shot requests, it is called once the request is sent, while for normal operations, it is called on receiving a response. The callback essentially serves as a way to chain dependent greybus operations, process the response, and so on. Once the operation concludes, it is removed from the in-flight operations dlist and moved to the callback-processing dlist. There is no strict reason to use a dlist for callbacks, since they are called in FIFO order, but the `sys_dnode_t` is already present, so it might as well be reused. Once the callback completes, the operation is deallocated.
The `gb_message` structure essentially stores a greybus message with a header and payload. It also contains a pointer to the associated `gb_operation`, if there is one (again, field types are reconstructed from the doc comment and may differ from the actual firmware):

```c
/*
 * Struct to represent greybus message
 *
 * @param operation: greybus operation this message is associated with. Can be
 *                   NULL in case of message received.
 * @param header: greybus msg header.
 * @param payload: heap allocated payload.
 * @param payload_size: size of payload in bytes
 */
struct gb_message {
	struct gb_operation *operation;
	struct gb_msg_header header;
	void *payload;
	size_t payload_size;
};
```
Communication with nodes
The communication with greybus nodes happens over TCP. Each CPort in the node exposes a TCP port, in incrementing order. We establish connections to these CPorts during node setup.
A reader thread runs continuously, using `k_poll` to check for sockets with pending data. If a valid greybus message is received, we first check whether it is associated with any in-flight greybus operation. If it is, we attach the response to the operation, which then goes on to callback processing. In the case of stand-alone messages, we do nothing, since they are probably intended for the AP.
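The matching step can be sketched as a search of the in-flight list by operation id. Here a hand-rolled doubly linked list stands in for Zephyr's `sys_dlist`, and all names are mine:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of matching a received greybus response against the
 * in-flight operations list. Illustrative only. */
struct gb_op {
    uint16_t operation_id;
    struct gb_op *prev, *next;
};

static struct gb_op *in_flight_head;

static void op_track(struct gb_op *op)
{
    op->prev = NULL;
    op->next = in_flight_head;
    if (in_flight_head)
        in_flight_head->prev = op;
    in_flight_head = op;
}

/* Find and unlink the operation a response belongs to. Returns NULL
 * for stand-alone messages, which are left for the AP. */
static struct gb_op *op_take_by_id(uint16_t id)
{
    for (struct gb_op *op = in_flight_head; op; op = op->next) {
        if (op->operation_id == id) {
            if (op->prev)
                op->prev->next = op->next;
            else
                in_flight_head = op->next;
            if (op->next)
                op->next->prev = op->prev;
            return op;
        }
    }
    return NULL;
}
```

Because completion is keyed on the id rather than list position, responses arriving out of order are handled naturally, which is exactly why a plain FIFO queue would not work here.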
A writer thread also runs continuously, using `k_poll` to check whether any socket is available for writing. If a socket can be written to, we check whether any greybus operation is pending for that socket. On a match, we send the message and mark the operation's request as sent.
Logging over HDLC
I also wrote a custom logging backend that sends all logging data as an HDLC frame with the DBG address. This data is then printed to the standard Linux logs. Currently, the logging backend does not do much message processing (such as honoring the log level), but that can be added in the future.
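The framing done by such a backend can be sketched as follows. The address value and the omission of the FCS are simplifications (real HDLC frames carry a checksum, and the actual DBG address value is defined by the firmware); this is not the backend's actual code:

```c
#include <assert.h>
#include <stddef.h>

#define HDLC_FLAG 0x7E
#define HDLC_ESC  0x7D

/* Wrap a payload into a byte-stuffed HDLC frame with an address
 * field. Simplified: no FCS, and the address byte itself is assumed
 * to never need escaping. Returns the encoded length. */
static size_t hdlc_encode(unsigned char addr, const unsigned char *payload,
                          size_t len, unsigned char *out)
{
    size_t n = 0;

    out[n++] = HDLC_FLAG;
    out[n++] = addr;
    for (size_t i = 0; i < len; i++) {
        unsigned char b = payload[i];
        if (b == HDLC_FLAG || b == HDLC_ESC) {
            out[n++] = HDLC_ESC;
            out[n++] = b ^ 0x20;
        } else {
            out[n++] = b;
        }
    }
    out[n++] = HDLC_FLAG;
    return n;
}
```

On the Linux side, the driver's receive path only has to look at the address field of each decoded frame to route DBG frames into the kernel log.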
Here are all the components for this demo:
- greybus-for-zephyr: 57cf1c1b1ee3388d1ad9971ed77f548ae7abf63a
- Zephyr: 520fb22555402360e5eba798f6834771254198af
- beagleplay-greybus-driver: 40ed0fbc6cbe3c150d7eec74331f53a5e4fc351b
- cc1352-firmware: d68e300440affc502209b7fa8f39e57bc0476346
The instructions for building beagleplay-greybus-driver can be found in my Linux driver post. The instructions for building cc1352-firmware are similar to those in my Zephyr application post. It is trickier to compile greybus-for-zephyr, but my fork (with some changes to the project config) should work.
For flashing, I am still using cc1352-flasher as shown in my previous post.
After installing the Linux driver, I used a simple Python script to reset the CC1352 and capture the full logs.
BeagleConnect Node Logs
BeaglePlay Linux Logs
I hope this sheds light on the current status of my project. You can follow my GSoC23-related blog posts using this feed.
Consider supporting me if you like my work.
Here are the CI build artifacts for anyone wanting to test this out: