MY18 Telemetry System

by Nick Stathas, MY18 Software Lead

Improving Design Using Data

In past years, MIT Motorsports race cars either did not feature a data acquisition package or were severely limited in data rate. With MY18, our latest vehicle, one of the primary goals was building a custom data pipeline capable of handling the amount of data produced by our car. Such data are crucial both for informing the design of our parts and for validating our simulations.

Using an in-house embedded device dubbed the “logger”, we are able to measure and record vehicle metrics ranging from power draw to lateral acceleration. Through our collaboration with Kx Systems, we expanded our sensor suite to collect more thorough data that improve our testing and tuning process, strengthening the team’s ability to maximize vehicle performance through a data-driven approach. Equations and simulations are the first step toward a successful design, but running actual experiments on the real vehicle is what ultimately validates it.

Data Acquisition Architecture

Overview of MY18’s low voltage system

Physical Overview

The car’s low-voltage system is composed of several boards, most of which carry a microcontroller unit. They interconnect over the CAN protocol, running at 1 Mbaud on CAN0 and 500 kbaud on CAN1.

The majority of the boards collect and relay vital information about the car’s state in real time. Some of those data points are used by the Electronic Control Unit and factored into the controls algorithms, while the rest are used by the team’s engineers to monitor vehicle performance or debug issues. For the latter to be possible, the Logger ensures that every CAN message from either CAN bus is logged to the on-board SD card. The Logger also features an XBee module capable of transmitting data over the 802.15.4 protocol at a rate of 115 kbaud. Because of the radio module’s bandwidth limit, a filter relays only selected CAN messages to the ground-station receiver.

Data Format

CAN messages consist of an identifier and a data field that spans from 1 to 8 bytes. Several protocols exist for documenting how useful information is packed into and unpacked from those 8 to 64 bits. MIT Motorsports has developed ParseCAN, a YAML-based CAN specification format along with an accompanying Python parser.

buses:
    can1:
        baudrate: 500000
        extended: false
        messages:
            CellVoltageRange:
                id: 0x017
                period: 31ms
                segments:
                    min:
                        position: 0
                        length: 14
                        type: <uint16
                        unit: mV
                    max:
                        position: 14
                        length: 14
                        type: <uint16
                        unit: mV
                    sum:
                        position: 28
                        length: 32
                        type: <uint32
                        unit: mV

Excerpt of the ParseCAN specification that describes MY18
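
To make the bit-packing concrete, here is a minimal q sketch of unpacking a CellVoltageRange payload by hand. It assumes the eight data bytes arrive little-endian and that bit positions count from the least-significant bit; the extract and asLong helpers and the example frame are hypothetical (in practice the Python parser does this work).

extract: {[payload; pos; len] (payload div prd pos#2) mod prd len#2}   / take len bits starting at bit pos
asLong: {0x0 sv reverse x}                                             / 8 little-endian bytes -> 64-bit integer

frame: 0x341223f1e80a0000;                                             / an arbitrary example payload
payload: asLong frame;
`min`max`sum ! (extract[payload;0;14]; extract[payload;14;14]; extract[payload;28;32])   / raw field values in mV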

Parsing Log Files

ParseCAN’s Python framework parses logs of raw CAN messages into vectors, each set of which is accompanied by its own time series. Since the specification itself forms a tree, so does a parsed log file: for every CAN bus it contains a dictionary that maps message names to NumPy structured arrays holding the actual data.

This data structure is produced by a sequence of CAN messages that are parsed individually. A parsed message looks like the following.

{'can0':
  {'EKF_Position':
    {'time': 113.1, 'lat': 423622, 'lon': -710975}
  }
}

Historic vs. Streaming Data

Since the entire data processing pipeline described here is atomic on the message level, there is no fundamental difference between a log file and a stream of raw CAN messages other than the latter being real-time. This allows us to use the exact same tools that process historical data to process streaming data.

Development Process in kdb+

Given this equivalence between historical and streaming data tools, we decided to develop the real-time processor in kdb+, at first using log files to simulate a stream. Only once the q code was functional did we pipe a live stream into the process. kdb+ made it possible to simulate a stream from .csv files with only a few lines of code.

Tickerplant Design

A tickerplant architecture was chosen for handling the incoming stream and forwarding it to subscribers. In this case subscribers range from a Kx Dashboards instance to multiple kdb+ instances running real-time calculations on a subset of the incoming data. Another reason for choosing the tickerplant was the existing documentation on building feed handlers: pieces of code, not necessarily written in q, that are responsible for formatting and feeding data into the tickerplant. The freedom the tickerplant model provides for data entry was vital to supporting the iterative development process described above.
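
For illustration, a publish step modelled on the standard kdb+ tick u.q might look like the following sketch (without u.q’s symbol filtering), assuming each entry of w[t] is a (handle; symbol filter) pair registered by a subscriber:

/ send the new rows x of table t asynchronously to every registered subscriber
pub: {[t;x] {[t;x;s] (neg first s)(`upd; t; x)}[t;x] each w[t]}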

Data Format

To bring the aforementioned nested-dictionary NumPy format into kdb+, we decided to flatten the first level of the hierarchy (can0, can1) and store the data points for each message in its own table.

For simplicity during the development phase we defined all fields other than time as floating point numbers. Proper type definitions were relegated to the feed handler.
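
For instance, a hypothetical schema for the EKF_Position message from the earlier example could be declared as follows, with a typed time column and every payload field left as a float:

/ hypothetical per-message table; non-time fields are plain floats during development
EKF_Position: ([] time:`timestamp$(); lat:`float$(); lon:`float$());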

Logging Scheme

The tickerplant init function takes the path to a log directory. It sets the lap counter d to 0 if no logs exist, or to the numerically largest log-file name so that logging resumes from that lap. Traditionally logs are named on an hourly or daily basis, but in our case the intent was to have one log per lap; d therefore represents the lap number.

init: {[logpath]
        / one empty subscriber list per table in the root namespace
        w::t!(count t::tables`.)#();
        / resume from the numerically largest existing log-file name
        d::max "I" $ string key logpath;
        / max of an empty list is -0Wi, i.e. no logs exist yet
        if [d=-0Wi; d::0];
        new[logpath;1];
  }

The new function takes the log path and a flag b. When b is 0 it closes the current log handle and increments the lap counter; init passes 1 so that the current lap is kept. It then attempts to open the log file with filename d within the log path, creating it if it does not already exist.

new: {[logpath;b]
        / when b is 0, close the current log handle and move on to the next lap
        if[not b; hclose l; d::d+1];
        / build the log-file name from the log path and the lap number
        L:: `$raze string logpath,d;
        / create an empty log if it does not exist, otherwise set i to the number of records already in it
        $[not (`$string d) in key logpath;[L set ();i::0];i::-11!(-2;L)];
        l:: hopen L;
        / ask every subscriber to reinitialise against the new log
        {(neg first first x)"reinit[]"} each (distinct value .u.w);
        d
  }

The upd function is relatively simple. f gets the columns of the table named t in the root namespace. g is a table with the same schema as t but containing only the new data found in x. g is then published to all subscribers and appended to the log file.

upd: {[t;x]
    / stash the most recent table name and payload in the .lb namespace
    .lb.t: t;
    .lb.x: x;
    / ignore empty updates
    if [not count x; :()];
    / column names of the table named t in the root namespace
    f: cols getfromdot[t];
    / shape the incoming data into a table with the columns of t
    g: $[0>type first x; enlist f!flip x; flip f!flip x];
    / publish to subscribers, then append to the log and bump the record counter
    pub[t; g];
    if [l; l enlist (`upd; t; g); i+:1];
  }

Generic Subscriber

This subscriber connects to the tickerplant, applies the schema of each table to the current namespace by calling .u.sub, and replays the log. Its upd function simply upserts new entries into the tables defined upon subscribing.

/ load the standard tickerplant subscriber utilities
\l tick/r.q
/ open a handle to the tickerplant (here on port 5010)
h: hopen 5010

/ upsert each published batch into the local copy of the table
upd: {[tabname;tabdata] tabname upsert tabdata}

/ subscribe to all tables and symbols, then replay the tickerplant log
.u.rep .(h:hopen `$":",.u.x 0)"(.u.sub[`;`];`.u `i`L)";

Fake Feed

To test the tickerplant, the analytics, and the dashboard features during development, it was necessary to develop a feed handler that plays back historic data, which we have in .csv format.

/ load every .csv in dir into a table named after the file, parsing all columns as floats
getcsvs: {[dir]
    csvs: key dir;
    (`$-4_'string csvs) set'
        {((count csv vs first read0 x)#"f"; enlist csv) 0: x}
        each `$raze each string dir,/: csvs}

path: `:C:/Users/nicks/Desktop/logs/;
tabs: getcsvs[path];

/ fix the csv header row that begins with #: rename the first column to time
{x set `time xcol .u.getfromdot[x]} each tabs

/ convert the relative time column (seconds) into absolute timestamps
beginTime:2018.06.23D12:42:22;
{update time:beginTime + `long$1000000000*time from x} each tabs;


/ Initialize the tickerplant
logpath: `:logs;
.u.init[logpath];

/ replay the historic tables through .u.upd in fixed slices of car time
t: beginTime;
step: 100000000;                                  / slice length in nanoseconds (100ms)
fakeupdate:{{.u.upd[x;flip value flip select from x where (time>=t) & (time<t+step)]} each .u.t; t::t+step};

.z.ts: fakeupdate;                                / called on every timer tick
begin: {system "t ",string (step % 1000000)}      / start the timer, converting the step to milliseconds

Feed Handler

The feed handler for the telemetry data was written in Python 3 using PyQ. Messages came through serial over XBee and were parsed using our in-house ParseCAN framework. After parsing, the incoming values were tagged with the appropriate table name symbol and passed to .u.upd. The bottleneck in the pipeline is the XBee transmission, which gave us the freedom to implement the feed handler in Python in only a few hours. With a faster reception medium it might become necessary to convert the feed handler to C.

Dashboards

Setting up polling queries in Dashboards was a fast process. The query editor behaves just like an editor for a .q file, which made implementing analytics within a Dashboards data source very easy. This left us plenty of time to design the data-projection layouts we deemed most critical to view during each of the dynamic events at the FSAE competition.
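
As an illustration, a polling query attached to a Dashboards data source can be as simple as the following hypothetical snippet, which pulls the most recent GPS fix from the EKF_Position table of the earlier example:

/ hypothetical polling query: latest GPS fix seen over the last five seconds
select last lat, last lon from EKF_Position where time > .z.p - 0D00:00:05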

One of the major advantages over traditional telemetry clients, which receive and display incoming data within a single application, is the decoupling of data reception from data projection. With the kdb+ and Dashboards pipeline, a single computer receives telemetry from the race car, stores it in memory, and serves it to Dashboards clients that may be local or remote. By setting up a WLAN, multiple clients on different computers can view any of the dashboards, so team members no longer need to huddle around a single screen.

Future applications of Dashboards include monitoring the lap times of other teams on track and recording them in the database. This can happen through mobile clients connected to the main Dashboards instance, with buttons that execute queries updating our log of other teams’ times. Then, during our own race, real-time analytics can compare our lap times to other teams’ and allow the crew to tell the driver when to push and when to conserve energy.
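
A hypothetical sketch of such a scheme follows: a LapTimes table appended to by the button-driven queries, and a query that keeps the best lap per team for comparison against our own (table and column names are illustrative).

/ hypothetical competitor lap-time log
LapTimes: ([] time:`timestamp$(); team:`symbol$(); lap:`long$(); laptime:`float$());

/ record a manually timed lap and compare best laps across teams
`LapTimes insert (.z.p; `OtherTeam; 3; 75.4);
select best: min laptime by team from LapTimes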

MIT FSAE