
How Many Promira Serial Platforms Can I Run Simultaneously and How Many SPI Target Devices Can Be Driven?


Question from the Customer:

We are looking to use two Promira Serial Platforms to drive multiple SPI target devices from the same computer.

  • How can we best control two Promira platforms? Which software applications would work best? We are running several tests and gathering a lot of data per test run.
  • How many target devices can each Promira platform drive?

Response from Technical Support:

Thanks for your question! The software application we recommend is the Promira Software API I2C/SPI Active.

Controlling Multiple Promira Serial Platforms

For your requirements, we recommend using the Promira queue implementation with the Promira API. The commands to multiple Promira platforms can be queued and then sent out in a batch.

Queuing API

The Promira API can be used to create up to 127 queues. Each queue can contain up to 255 commands. Multiple queues can be submitted asynchronously, as long as the uncollected data is less than the internal Promira buffer size (2 MB).

In SPI Slave mode, the Promira platform can send multiple SPI bytes without delay in one transaction. To do so, the SS (slave select) must be asserted for the entire transaction.

The maximum amount of outstanding slave data to collect is 2 MB – 1. To avoid losing data, we recommend collecting the SPI slave data as soon as possible.

  • The maximum data size of a single command is 1 MB.
  • The maximum amount of data in a queue is 64 MB – 1.

You can create several queues for SPI. After you submit the queues to the Promira platform, each queue will operate as a separate sub-system. For more information, please refer to the subsection Queue Overview in the Promira Serial Platform I2C/SPI Active User Manual.
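
As a quick illustration of these limits, here is a minimal Python sketch. It is not part of the Promira API; the helper name validate_queue is ours, and the limit values are simply the figures quoted above, checked against a planned batch before the commands are handed to the queuing calls:

# Queue limits quoted from the Promira Serial Platform I2C/SPI Active User Manual
MAX_QUEUES          = 127                    # up to 127 queues can be created
MAX_CMDS_PER_QUEUE  = 255
MAX_BYTES_PER_CMD   = 1024 * 1024            # 1 MB per command
MAX_BYTES_PER_QUEUE = 64 * 1024 * 1024 - 1   # 64 MB - 1 per queue

def validate_queue(command_sizes):
    # command_sizes: list of payload sizes in bytes, one entry per queued command
    if len(command_sizes) > MAX_CMDS_PER_QUEUE:
        raise ValueError("too many commands for one queue (max 255)")
    if any(size > MAX_BYTES_PER_CMD for size in command_sizes):
        raise ValueError("a single command exceeds 1 MB")
    if sum(command_sizes) > MAX_BYTES_PER_QUEUE:
        raise ValueError("queue exceeds 64 MB - 1 of total data")

# Example: 200 commands of 32 kB each fit comfortably in one queue
validate_queue([32 * 1024] * 200)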

Sample Promira API Programs

Several functional program examples are provided in the Promira Software API I2C/SPI Active package. Each program can be modified as needed for your requirements. The Promira API supports many programming languages; the examples we are referring to are written in Python.

Here are two scripts that could be useful for your setup. Each of these scripts is based on spi_slave.py, which is provided in the Promira API package.

  • spi_stream_master.py: sends 32 kB packets with 3 ms intervals between the packets. This program can be further modified to send data to multiple Promira platforms.
  • spi_stream_slave.py: responds to spi_stream_master.py

Driving Target Devices

The Promira platform has the capability to drive up to eight loads at the same time. However, the maximum load may vary depending on the power requirements of the target devices. For details, please refer to the section Electrical Specifications of the Promira Serial Platform I2C/SPI Active User Manual.

How Master and Slave Lines Operate

When the SPI interface is activated as a master, the Slave Select line (SS) is actively driven low. The MOSI, SCK, IO2, and IO3 lines are driven as appropriate for the SPI mode. After each transmission is completed, these lines return to a high impedance state.

This feature allows the Promira Serial Platform, following a transaction as a master SPI device, to be reconnected to another SPI environment as a slave.

The Promira platform will not fight the master lines in the new environment.

  • It is advisable that every slave device also has passive pull-ups on the MOSI and SCK lines.
  • These pull-up resistors can be relatively weak – 100k should be adequate.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo


How Does I2C Messaging Work?


The I2C serial communication protocol was first invented by the Philips Semiconductor company, now known as NXP Semiconductors, in 1982. Now approaching its 40th anniversary, the protocol offers an effective means of short-distance intra-board communication that is ideal for embedded systems and micro-computing applications where the primary design concerns are simplicity and low manufacturing cost.

For embedded systems engineers who wish to design products that use the I2C protocol, programming in the I2C language becomes a necessary skill set. In this week's blog post, we give a basic overview of how I2C messaging works. We will cover the I2C messaging protocol along with key features of the I2C protocol that make it ideally suited for use within your embedded computing projects.

How Does I2C Messaging Work?

We can begin our discussion of I2C with a basic overview of the protocol and its necessary components.

The name I2C is an abbreviation of the term Inter-Integrated Circuit. Here, the term "integrated circuit" essentially means "computer chip". There are many types of computer chips, including those used for processing (CPU), memory (RAM, EEPROM), and other functions. The concept of an inter-integrated circuit protocol tells us that the protocol will allow these individual chips to communicate with each other. In fact, this is exactly what the I2C protocol is used for.

The I2C protocol was designed to enable multiple slave devices (e.g. memory and other peripheral chips) to communicate with one or more master devices (e.g. microcontrollers) over short distances. Importantly, the I2C communication protocol allows for precise communication between multiple slave devices and more than one master device: up to 1008 individual devices can communicate across the same bus. The protocol features that make this possible are the two-wire configuration, slave addressing, and the defined messaging protocols associated with I2C.

The I2C communication protocol conveys messages using a two-wire configuration. The Serial Clock Wire, typically abbreviated SCL, synchronizes data transfer between master and slave devices on the I2C bus by transmitting a clock signal from master device to slave device. In I2C, clock signals are always generated by the master device. The second wire in I2C is known as the Serial Data wire, or SDA. This wire permits bidirectional data transfer between master and slave devices.

Basic diagram of the I2C communication protocol, including master and slave devices, SDA and SCL lines and two-wire configuration.

I2C Bus Protocol

A serial communication protocol is essentially a set of rules that defines a shared language and syntax for communication between one or more electronic devices. For any device that uses I2C, data transmissions will always be initiated by a master device. This is because master devices in I2C control the clock line, which synchronizes all data transfers over the bus.

Messages that originate at a master device on the I2C bus follow a predictable format (a short code sketch of the sequence follows the list):

  1. To transmit data, the master device must generate a START condition
  2. Following the START condition, a slave address is transmitted to indicate where the message will be sent.
  3. A single bit indicates whether the master device will read data from or write data to the slave.
  4. An ACK bit is used after each byte of data transferred to acknowledge receipt.
  5. Data is transferred in single-byte (8-bit) blocks, followed by an ACK bit.
  6. When the message concludes, a STOP condition denotes the end of the data transfer.
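
To make the sequence concrete, here is a small Python sketch. It is purely illustrative (it models the bus events as a list of strings rather than driving real hardware) and lays out a master write transaction in the order listed above:

def i2c_write_transaction(slave_addr, data_bytes):
    # Ordered bus events for a master write, following steps 1-6 above
    events = ["START", "ADDRESS 0x%02X + W" % slave_addr]
    for byte in data_bytes:
        events += ["ACK", "DATA 0x%02X" % byte]
    events += ["ACK", "STOP"]
    return events

print(i2c_write_transaction(0x50, [0xA5, 0x3C]))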

 

Start and Stop Conditions

The purpose of a communication protocol is to enable communication between multiple master and slave devices. Communication takes place in a series of messages that are transmitted across the SDA wire. When it is time to send a message, the system must generate a START condition that indicates the beginning of a message and later a STOP condition that signals the conclusion of the message - just like saying "Hello!" and "Goodbye!" at either end of a phone call.

In the I2C protocol, the communication lines are "open-drain" and are held high by pull-up resistors. When it is time to send a message, a START condition is created by generating a high-to-low change on the SDA line: the SDA line, normally held high by its pull-up resistor, is pulled low while SCL remains high. In contrast, a STOP condition is generated when the SDA line goes from low to high while SCL is high.

Any device that uses I2C operates in up to four basic modes. While some devices utilize all four, other I2C devices make use of just one or two. The four potential operational modes are:

  1. Master device transmitting data to a slave device
  2. Master device receiving data from a slave device
  3. Slave device transmitting data to a master device
  4. Slave device receiving data from a master device

 

Slave Addressing

In the I2C protocol, slave addressing ensures that the correct slave devices are identified as recipients when a master device sends a message. The I2C protocol supports two different formats for slave addressing: 7-bit addressing and 10-bit addressing.

In 7-bit addressing, the address for the slave device is transmitted in the 7 bits immediately following the START condition. An 8th bit acts as a read/write indicator where a 0 indicates that the master wants to write information to the slave and a 1 indicates that the master wants to read information from the slave. 7-bit addressing would typically allow up to 128 devices with unique slave addresses on the bus, but with sixteen 7-bit addresses reserved for special functions the number is slightly lower at 112.
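
As a concrete example, the address byte that opens a 7-bit transaction can be composed like this (a minimal Python sketch; the slave address 0x50, a common EEPROM address, is just an illustrative value):

def i2c_address_byte(slave_addr_7bit, read):
    # The 7-bit address occupies bits 7..1; the read/write indicator is bit 0
    # (1 = read, 0 = write)
    return ((slave_addr_7bit & 0x7F) << 1) | (1 if read else 0)

print(hex(i2c_address_byte(0x50, read=False)))  # 0xa0 -> master writes to 0x50
print(hex(i2c_address_byte(0x50, read=True)))   # 0xa1 -> master reads from 0x50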

In 10-bit addressing, the full range of 10-bit addresses is available, meaning that up to 1024 slave devices can be connected to the system. Here, a special reserved address is used following the START condition to indicate the presence of a 10-bit address. Following the 10-bit address indicator, the first two bits of the address will be sent, then a read/write indicator, then an ACK. Once the slave acknowledges the master, the next byte transmitted by the master will contain the rest of the slave address.
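
The two address bytes used in 10-bit addressing can be sketched the same way (again a minimal Python illustration; it only composes the bytes and does not model the intervening ACK):

def i2c_10bit_address_bytes(slave_addr_10bit, read):
    # First byte: 1 1 1 1 0 A9 A8 R/W -- the reserved 10-bit indicator,
    # the two most significant address bits, and the read/write indicator
    first = 0xF0 | (((slave_addr_10bit >> 8) & 0x03) << 1) | (1 if read else 0)
    # Second byte: the remaining eight address bits A7..A0
    second = slave_addr_10bit & 0xFF
    return first, second

print([hex(b) for b in i2c_10bit_address_bytes(0x2A5, read=False)])  # ['0xf4', '0xa5']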

We've also seen some slave devices that use an 8-bit addressing scheme. While these devices are not following typical I2C conventions, there are still methods of communicating with 8-bit addressed slave devices using the I2C protocol.

Conclusion

As your embedded I2C device increases in complexity, it can become more difficult to streamline your I2C programming and ensure bug-free operation. At Total Phase, we build products that help streamline the error diagnostic and debugging aspects of product development. With our Beagle I2C/SPI Protocol Analyzer, embedded engineers can gain enhanced insight into the internal workings of their devices, rapidly diagnose and fix coding errors, and build better products.

Request a Demo

How Do I Read/Write to an I2C EEPROM with 16-bit Data?


Question from the Customer:

I am using the Aardvark I2C/SPI Host Adapter and Aardvark Software API to program and read an I2C EEPROM device, the AT24C32. This I2C device has a 2-byte address and uses 2-byte data. Is there a way to use your API functions for that? Your API functions are designed for 8-bit data, but this device has a 16-bit address. The programming language I am using is C.

Response from Technical Support:

Thanks for your question! The API functions are designed for 8-bit data, but you can easily use them to read/write with 16-bit data. The API functions and an example are described below.

Using API Functions for 16-bit Data

The aa_i2c_write() and aa_i2c_read() functions have the u16 data type for 16-bit addresses.

Reading from the I2C slave device:

int aa_i2c_read (Aardvark            aardvark,
                 aa_u16              slave_addr,
                 AardvarkI2cFlags    flags,
                 aa_u16              num_bytes,
                 aa_u08 *            data_in);

Arguments

aardvark        handle of an Aardvark adapter
slave_addr      the slave from which to read
flags           special operations
num_bytes       the number of bytes to read (maximum 65535)
data_in         pointer to data

Writing to the I2C slave device:

int aa_i2c_write (Aardvark         aardvark,
                  aa_u16           slave_addr,
                  AardvarkI2cFlags flags,
                  aa_u16           num_bytes,
                  const aa_u08 *   data_out);

Arguments

aardvark        handle of an Aardvark adapter
slave_addr      the slave to which to write
flags           special operations
num_bytes       the number of bytes to write (maximum 65535)
data_out        pointer to data

These functions are designed for 8-bit data. However, 16-bit data can be sent byte-by-byte as 2 separate bytes.

Example to Read/Write 16-bit Data

Here is an example of using those functions to read/write 16-bit data. Please note, this example is pseudo-code, not an actual program.

u08 addr[2];
int addr_byte = 1;

if (addr_t > 255) {
    addr[0] = (addr_t >> 8) & 0xff;  // upper byte of the 16-bit address
    addr[1] = (addr_t >> 0) & 0xff;  // lower byte of the 16-bit address
    addr_byte = 2;
}
else {
    addr[0] = addr_t & 0xff;         // single-byte address
    addr[1] = 0;
    addr_byte = 1;
}

// Write the address bytes without a STOP condition, then read; the read
// begins at addr_t because the bus is held between the two calls.
aa_i2c_write(handle, device, AA_I2C_NO_STOP, addr_byte, addr);
aa_i2c_read(handle, device, AA_I2C_NO_FLAGS, num_bytes, data_in);

For more information about the API functions, including the special functions of the flags parameter, please refer to the subsection I2C Interface of the Aardvark I2C/SPI Host Adapter User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

What is Enumeration and Why are USB Descriptors Important?


The USB Protocol

The USB protocol was introduced in 1996 as a way to institutionalize a more widespread, uniform cable and connector that could be used across a multitude of different devices. The idea was to simplify the connection of devices to and from a host computer. The protocol is currently maintained and regulated by the USB Implementers Forum, better known as USB-IF. This group sets standards that all USB devices must follow in order to be compliant with the technology and work properly across all USB compatible devices.

USB has a wide range of device types. Whether a user is plugging in a mouse and keyboard or connecting a flash memory drive, the process remains simple and seamless.  The USB specification was written specifically to provide this plug and play user experience. This experience is made possible by the USB enumeration process.

Picture of mouse and keyboard by bongkarn thanyakij from Pexels

 

What is Enumeration?

Enumeration within a USB system is a process where the host detects the presence of a device, determines what type of device is connected, and defines the speed at which to communicate. This process is important because different types of USB devices will communicate or interact with the host differently.

Referring back to the computer mouse and flash memory device, once connected to a host, these devices will behave differently because they have different functions. A mouse is a human interface device designed to provide a method for a user to interact with the host. This device operates entirely differently from a memory device such as a flash drive. A flash drive is used to read and write data to and from the computer. The memory device and mouse are different USB devices, yet the host computer knows how to interact with each device because of the USB device descriptors sent to the host during the enumeration process. The enumeration process goes through nine main steps as seen in the graphic below.

 

USB Enumeration Process

 

What are USB Descriptors?

USB descriptors are presented to the host during the enumeration process. These descriptors tell the host what type of device is connected and how to properly communicate with it. There are four types of descriptors: Device Descriptors, Configuration Descriptors, Interface Descriptors, and Endpoint Descriptors.

 

Device Descriptor

Each USB device can only have a single Device Descriptor. This descriptor contains information that applies globally to the device, such as serial number, vendor ID, product ID, etc. The device descriptor also has information about the device class. The host PC can use this information to help determine what driver to load for the device.
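
To illustrate the kind of information the host receives, here is a short Python sketch that unpacks the 18-byte standard device descriptor into its named fields. The example bytes are made up for illustration and do not describe a real product:

import struct

DEVICE_DESCRIPTOR = struct.Struct("<BBHBBBBHHHBBBB")   # 18 bytes, little-endian
FIELDS = ("bLength", "bDescriptorType", "bcdUSB", "bDeviceClass",
          "bDeviceSubClass", "bDeviceProtocol", "bMaxPacketSize0",
          "idVendor", "idProduct", "bcdDevice",
          "iManufacturer", "iProduct", "iSerialNumber", "bNumConfigurations")

def parse_device_descriptor(raw):
    return dict(zip(FIELDS, DEVICE_DESCRIPTOR.unpack(raw)))

# Illustrative bytes: a USB 2.0 device, 64-byte EP0, vendor 0x1234, product 0x5678
example = bytes([0x12, 0x01, 0x00, 0x02, 0x00, 0x00, 0x00, 0x40,
                 0x34, 0x12, 0x78, 0x56, 0x00, 0x01, 0x01, 0x02, 0x03, 0x01])
print(parse_device_descriptor(example))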

 

Configuration Descriptor

A device descriptor can have one or more configuration descriptors. Each of these descriptors defines how the device is powered (e.g. bus-powered or self-powered), the maximum power consumption, and what interfaces are available in this particular setup. The host can choose whether to read just the configuration descriptor or the entire hierarchy (configuration, interfaces, and alternate interfaces) at once.

 

Interface Descriptor

A configuration descriptor defines one or more interface descriptors. Each interface number can be subdivided into multiple alternate interfaces that help more finely modify the characteristics of a device. The host PC selects a particular alternate interface depending on what functions it wishes to access. The interface also has class information which the host PC can use to determine what driver to use.

 

Endpoint Descriptor

An interface descriptor defines one or more endpoints. The endpoint descriptor is the last descriptor in the configuration hierarchy and it defines the bandwidth requirements, transfer type, and transfer direction of an endpoint. For transfer direction, an endpoint is either a source (IN) or sink (OUT) of the USB device.
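
A similar sketch for the 7-byte standard endpoint descriptor shows where the transfer direction and transfer type come from (again, the example bytes are illustrative only; they resemble a typical interrupt IN endpoint):

import struct

EP_FIELDS = ("bLength", "bDescriptorType", "bEndpointAddress",
             "bmAttributes", "wMaxPacketSize", "bInterval")

def parse_endpoint_descriptor(raw):
    desc = dict(zip(EP_FIELDS, struct.unpack("<BBBBHB", raw)))
    # Bit 7 of bEndpointAddress gives the direction: 1 = IN (device to host),
    # 0 = OUT (host to device); the low two bits of bmAttributes give the type.
    desc["direction"] = "IN" if desc["bEndpointAddress"] & 0x80 else "OUT"
    desc["transfer_type"] = ("Control", "Isochronous",
                             "Bulk", "Interrupt")[desc["bmAttributes"] & 0x03]
    return desc

# Endpoint 1 IN, interrupt transfers, 8-byte max packet, 10 ms polling interval
print(parse_endpoint_descriptor(bytes([0x07, 0x05, 0x81, 0x03, 0x08, 0x00, 0x0A])))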

 

Loading USB Drivers

Once these descriptors are shared, the host then loads the proper drivers needed to operate the device. USB driver files are installed on the host and provide the information that allows proper communication with the variety of USB device classes. These driver files provide the details to interpret the device descriptors to seamlessly interact with the USB device. Most USB devices are from a common specification-defined class and can rely on a standard driver that is automatically pulled and installed without the user knowing it.  However, with custom USB class devices, the user may need to download a special driver, provided by the developer, to successfully communicate with the USB device.

 

USB Device Classes

 

Why is USB Enumeration Important?

The enumeration process is a vital part of the USB architecture. Since USB has such a diverse set of device types, it is important that the host has a way to define how to communicate properly with each individual device. The enumeration process ensures that every USB connected device is recognized by the host to ensure proper data transfer. Without enumeration, a host is unable to define the device type and what type of data transfer to use, the interval at which to send data, or even the speed of communication.

 

Debugging the Enumeration Process

Total Phase has a variety of USB tools to help developers debug the enumeration process. In most USB debugging, developers run into issues where their devices do not communicate with the host as intended. Total Phase offers a line of Beagle USB Protocol Analyzers that can help provide information on why the enumeration process may be failing.

All of the Total Phase Beagle USB Protocol Analyzers capture and trace the enumeration process in great detail. Whether working with Low Speed or SuperSpeed USB, there is an analyzer to help. One of the most popular Total Phase USB protocol analyzers is the Beagle USB 5000 v2 SuperSpeed Protocol Analyzer.

 

Beagle USB 5000 v2 SuperSpeed Protocol Analyzer

 

The Beagle USB 5000 v2 SuperSpeed Protocol Analyzer non-intrusively monitors SuperSpeed/High-/Full-/Low-Speed USB traffic up to 5 Gbps. The Standard edition can monitor either USB 2.0 or USB 3.0 traffic, while the Ultimate edition can monitor USB 2.0 and USB 3.0 traffic simultaneously. This analyzer offers real-time display, search, and filtering of captured data, along with descriptor and USB class decoding. It also offers users the ability to perform USB 2.0/USB 3.0 advanced triggering with flexible state-based conditions on data patterns, packet types, error types, events, and other criteria. Additionally, it provides enhanced visibility into the USB 3.0 bus, detecting low-level bus events including link training, LFPS polling, training sequences, and provides a view into the LTSSM which tracks upstream and downstream link state transitions.

 

Descriptor Decoding on the Beagle USB 5000 v2 Protocol Analyzer

The Beagle USB 5000 v2 analyzer captures early, low-level bus events, including the enumeration process. Combined with the Data Center Software, users can see all of the enumeration information in decoded format. For instance, take a look at the information below, captured from a USB computer mouse by the Beagle USB 5000 v2 analyzer and displayed in the Data Center Software. The Data Center Software logs information on all four of the main USB descriptor data packets, as seen in the images below.

Device Descriptor in the Data Center Software

Configuration Descriptor in the Data Center Software

Interface Descriptor in the Data Center Software

Endpoint Descriptor in the Data Center Software

Understanding the enumeration process is one of the most important aspects of the USB protocol. It is also one of the most challenging aspects of implementing a USB device and can quickly cause problems in the development cycle. The proper enumeration of USB devices is an absolute must when it comes to USB, and having the right tools to debug is essential.

For more information on how Total Phase interfaces with the USB bus, check out our USB product page. To see one of our USB analyzers in action, check out this video of our Beagle USB 5000 v2 Protocol Analyzer sniffing a USB 3.0 flash memory device.

 

How Do I Configure GPIO Pins Using the Komodo Software API?


Question from the Customer:

I am using a Komodo CAN Duo Interface with the Komodo Software API. I am trying to control the GPIO pins for output signals. I think I have done everything necessary, but the GPIO pins are not responding as expected. With my program, pin 4 stays low like this:

Functions:

Setting Pin 1 to logic low
Setting Pin 4 to logic low

Output:

Pin 1 is low
Pin 4 is low

Functions:

Setting Pin 1 to logic high
Setting Pin 4 to logic high

Output:

Pin 1 is high
Pin 4 is low

I used the Total Phase example file gpio.py and made some changes for the functions I need. Here are sections of the code used for setting up the Komodo interface and configuring the GPIO:

# Acquire features to control and listen CAN A

ret = km_acquire(km, KM_FEATURE_CAN_A_CONFIG  |
                 KM_FEATURE_CAN_A_CONTROL |
                 KM_FEATURE_CAN_A_LISTEN  |
                 KM)

print('\nSetting Pin 1 to logic High')
km_gpio_set(km, 1, KM_GPIO_PIN_1_MASK)

print('\nSetting Pin 4 to logic High')
km_gpio_set(km, 1, KM_GPIO_PIN_4_MASK)

Can you take a look and tell me what I left out or need to change?

Response from Technical Support:

Thanks for your question! Your challenge is with the API call int km_gpio_set (Komodo komodo, u08 value, u08 mask), and it’s very easy to fix. Both the value and the mask are bit fields. We’ll provide some details about the values, followed by an example of what is needed to make your program work.

GPIO Configuration and Mask Fields

Here are the values of the parameters for int km_gpio_set(), which were extracted from the header files:

/* GPIO Configuration */
#define KM_GPIO_PIN_1_CONFIG 0x00
#define KM_GPIO_PIN_2_CONFIG 0x01
#define KM_GPIO_PIN_3_CONFIG 0x02
#define KM_GPIO_PIN_4_CONFIG 0x03
#define KM_GPIO_PIN_5_CONFIG 0x04
#define KM_GPIO_PIN_6_CONFIG 0x05
#define KM_GPIO_PIN_7_CONFIG 0x06
#define KM_GPIO_PIN_8_CONFIG 0x07
/* GPIO Mask */
#define KM_GPIO_PIN_1_MASK 0x01
#define KM_GPIO_PIN_2_MASK 0x02
#define KM_GPIO_PIN_3_MASK 0x04
#define KM_GPIO_PIN_4_MASK 0x08
#define KM_GPIO_PIN_5_MASK 0x10
#define KM_GPIO_PIN_6_MASK 0x20
#define KM_GPIO_PIN_7_MASK 0x40
#define KM_GPIO_PIN_8_MASK 0x80

Mask for GPIO

Both parameters are bit fields: bit 0 corresponds to pin 1, bit 1 to pin 2, and so on, as the mask constants above show. To turn on GPIO pin 4, what you have is:

km_gpio_set(km, 1, KM_GPIO_PIN_4_MASK)

Here the value 1 only sets bit 0, which is pin 1; pin 4 corresponds to bit 3 (0x08), so this call leaves pin 4 low. All you need to do is use the pin 4 mask as the value as well, as shown below:

km_gpio_set(km, KM_GPIO_PIN_4_MASK, KM_GPIO_PIN_4_MASK)
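
Because both parameters are bit fields, the same call can drive several pins at once. Here is a short sketch in the style of gpio.py; it assumes km is an already-opened Komodo handle with the GPIO feature acquired, and that the mask argument selects which pins the call affects:

# Assumption: the mask selects which pins are affected, the value sets their levels

# Drive pins 1 and 4 high in a single call
km_gpio_set(km, KM_GPIO_PIN_1_MASK | KM_GPIO_PIN_4_MASK,
                KM_GPIO_PIN_1_MASK | KM_GPIO_PIN_4_MASK)

# Drive pin 4 low again while leaving pin 1 unchanged
km_gpio_set(km, 0, KM_GPIO_PIN_4_MASK)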

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

What are USB Classes and Why Do I Need Class Decoding?


What are USB Classes?

Universal Serial Bus (USB) has become the most widely used standard interface for connecting peripheral devices to a host computer.  A key advantage and differentiator of USB is the fact that any standard external USB device will instantly connect once plugged into a host computer by way of USB classes.

USB classes can be defined as groups of similar devices, such as Audio, Human Interface Devices (HID), and Mass Storage, that use a standard set of commands which allow them to share a common USB class driver.

USB then defines class code information that is used to identify a device’s functionality and to load a device driver based on that specific functionality. This class-level decoding becomes an integral part of interfacing and debugging over USB.

Why is USB Class-Level Decoding Important?

Class-level decoding translates low-level USB protocol level data to USB class-level commands and instructions that are more easily understood by the end user.

The ability to understand class-level data helps the engineer more easily isolate potential errors and bugs within the USB 2.0 and 3.0 protocols. Raw packets can also be parsed into human readable format in real time when using certain tools and applications.

Which Solutions can Support Real-Time USB Class-Level Decoding?

USB protocol analyzers with interactive GUIs such as those from Total Phase can make USB debugging faster and more efficient through class-level decoding.

The powerful USB class-level decoding feature is part of Total Phase’s Data Center Software, which allows users to streamline and expedite the analysis process in real time. To use this feature, simply start a capture, plug in a USB device to the Beagle USB 480 Protocol Analyzer or Beagle USB 5000 v2 SuperSpeed Protocol Analyzer, and the software will automatically decode protocol-level packets into class-level decoded data.

Here is a comparison between the protocol-level view and the class-level view.

Protocol-Level View

The USB data from a Mass Storage device has been organized into packet groups. The data is in its raw format which is difficult to understand.

Class-Level View

The USB traffic has been organized into hierarchical Mass Storage specific data groups. Now that the class-level data is decoded, it is easier to understand.

Supported Classes:

The Data Center Software supports the following classes and more:

  • Audio v1.0 - v2.0
  • Communications Device Class (CDC)
  • Printer
  • Mass Storage (SCSI, UASP)
  • Human Interface Device (HID)
  • Video (v1.0 – v1.1)
  • Device Firmware Update (DFU)
  • Network Control Model (NCM)
  • Mobile Direct Line Model (MDLM)
  • Hub
  • Still Image (MTP, PTP)

 Example Features:

Info and Data Panes

The image below shows USB 2.0 traffic from an HID device. The class-level data has been decoded, and the Info Pane at the far right displays the parsed class-level fields for easy viewing.

When a field is highlighted in the Info Pane, the relevant portion of the data payload is highlighted in the Data Pane on the bottom.

Benefits from Info and Data Pane View:

  • Class-level data is parsed into a human-readable format.
  • Class-level fields in the data payload are clearly displayed.
Info and Data Pane View in Data Center Software

Class-Level Decoding Availability

Which Total Phase tools support class-level decoding capabilities?

For further information on our USB products, please visit our USB Product Comparison chart or email us at sales@totalphase.com.

How Does the Beagle USB 5000 v2 Protocol Analyzer Store Data Beyond Its Internal Buffer?


Question from the Customer:

I am looking at the Beagle USB 5000 v2 SuperSpeed Protocol Analyzer - Standard Edition, which supports both USB 2.0 and USB 3.0 at a reasonable cost. For some cases, I need to store large amounts of data; some tests run for days. How can I store data that exceeds the size of the internal memory buffer?

Response from Technical Support:

Thanks for your question! The Beagle USB 5000 v2 analyzer – Standard Edition contains a 2 GB USB 3.0 memory buffer. For a larger internal buffer of 4 GB, you may consider the Ultimate Edition. An additional buffer supports storing 128 MB of USB 2.0 data, which can be used in parallel with the USB 3.0 buffer. We will explain how the buffer works and two ways you can maximize the data stored.

How the Memory Buffer Works

The internal memory acts as a temporary FIFO storage buffer. When a trigger condition occurs, the captured data is streamed from the Beagle USB 5000 v2 analyzer to the analysis computer over the High-speed USB downlink. (Note: On the Ultimate Edition, the downlink is upgraded to SuperSpeed.) The internal buffer is constantly emptied, freeing up the hardware memory for capturing more data. When using the Data Center Software with the Beagle USB 5000 v2 analyzer, captured data can be continuously streamed to the RAM of the computer. The only limitation to capture size is the amount of available RAM on the analysis computer. Typically, up to 80% of the computer’s RAM can be used to store data.

There are two ways to adjust the amount of data that can be stored: use Triggering and Complex Matching with the hardware circular buffer, or use the Beagle Software API to deliver captured data to the hard drive of the computer.

Triggering Data and Complex Matching

Using the Match/Action Triggering and Filtering features, you can capture and store only relevant data. This way, the computer RAM is used more efficiently. These features are provided in the Data Center Software.

With Complex Match, you set up a trigger condition and start the capture.

  • The analyzer runs in a “pre-trigger” mode until the trigger condition is met.
  • In the pre-trigger mode, the analyzer captures traffic in a circular hardware buffer on the Beagle USB 5000 v2 analyzer. The internal memory behaves as a circular buffer; only the most recent records are retained.

When the trigger condition is met, the Data Center Software downloads the data in the pre-trigger buffer and starts capturing traffic from that point on. This method of data capture will continue until all the available space of the computer RAM is used.

Create a USB Complex Match and Trigger a Capture with the Beagle USB 5000 v2 Analyzer

Here is a video that demonstrates using the Match/Action Triggering and Filtering features:

Storing Captured Data on the Hard Drive

The Data Center Software does not support streaming data to the hard drive. However, you can create a storage application with the Beagle Software API. The API supports Windows, Linux, and Mac OS X operating systems and multiple software languages. Functional examples are provided that can be used as-is or modified to meet your specifications. For details, please refer to the API Documentation section of the Beagle Protocol Analyzer User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

What is a Human Interface Device (HID)?


If you're an embedded systems engineer, you may want to build a product that either takes input from a human operator or delivers outputs to an operator through a human-readable interface. When such functions are to be implemented, it is common for engineers to use a special type of communication protocol known as the Human Interface Device (HID) protocol.

The HID standard specification was first proposed by Mike Van Flandern, a Microsoft employee who worked closely with the committee responsible for developing the USB standard specifications. The proposal called for the establishment of a Human Input Device working group with the goal of standardizing and simplifying the process of installing many different kinds of computer input and output devices across an equally diverse range of machines and operating systems.

In this week's blog post, we're taking a deep look at human interface devices and the HID standard. We'll look at where the standard and specifications came from, how they are used, and the most common types of human interface devices whose operation is streamlined by the HID standard.

What is a Human Interface Device (HID)?

A human interface device can be defined as a type of computer device that ordinarily takes input from humans and gives output to humans. While the original plan for the HID specification was based on the idea of a standard for human input devices, the acronym itself was changed to human interface device as it became clear that the standard would support bi-directional communication, meaning both inputs and outputs.

Prior to the creation of the HID standard, various types of input devices were required to conform to highly specific protocols that varied by the type of device. The communication protocol for each device supported only a relatively narrow set of functions. For example, the standard protocol for a mouse supported X- and Y-axis movement and input from two buttons. This left hardware developers rather restricted in terms of what kinds of peripheral I/O device designs could be supported by computers. Manufacturers either had to overload an existing protocol with excess data, write their own customized device drivers, or conform to the existing protocol (which could stifle innovation).

The impact of the HID standard was that it removed many of the design limitations that were faced by peripherals manufacturers and enabled plug-and-play for most devices without the need to program a unique or customized driver for each new device.

How Does USB Human Interface Device (HID) Work?

The HID protocol defines two separate entities that are involved in data transfer: the host and the device. By convention, the device is the object that is in direct interface with a human operator, such as a computer mouse or a keyboard. The role of the host, then, is to receive inputs from the device and transmit outputs to the device. A human operator will be responsible for triggering device inputs and receiving device outputs. The most generic host device is a personal computer, but mobile phones, tablets, and other systems may also act as the host under the USB HID standard.

The USB HID protocol allows a human interface device to define its data packets in the form of an HID descriptor, which is then transmitted to the host. A HID descriptor is hard-coded into the device and provides the host with information about the packets that the device will send, including how many packets are supported, what the size will be, and what information will be contained in each bit or byte within packets. The host machine then parses the HID descriptor, enabling it to receive and accurately interpret packets from the device in a way that reflects the desired user input.

Image courtesy of Unsplash: The Human Interface Device (HID) standard allows users to enjoy plug-and-play functionality from almost any I/O device, starting with the basics: a keyboard and a mouse.

Common Types of HID Devices

The original goal of the HID standard specification was to enable simple installation and usage for a variety of computer input and output devices. As such, there are many different types of HID devices whose operation is vastly improved through the use of USB HID. Some of the most common types of HID devices include:

  • Keyboard - The character produced by pressing any key on the keyboard is determined by the keyboard driver software. Keyboards that use the USB HID standard will be hard-coded with an HID descriptor that maps individual key presses to the appropriate character.
  • Refreshable Braille Displays - Braille displays use a series of round-tipped pins that can be raised or lowered, enabling users with visual impairments to read the textual outputs of the host machine.
  • Mouse - The introduction of the HID standard allowed electronics companies to design mice with more versatile functions than before.
  • Graphics Tablet - A graphics tablet is used by digital graphic artists to draw by hand directly into the digital medium. This function is supported by the HID standard and related capabilities.

In addition, gaming controllers, joysticks, touchscreen monitors, magnetic strip readers, fingerprint scanners, and sound output devices like headsets and speakers are all HID devices. Today, these devices can typically be hot-plugged to your PC and used immediately with minimal set-up because of the HID standard.

What are SMBus and HID Over I2C?

I2C and SMBus are two-wire serial communication protocols that are compatible with each other, especially at the 100kHz data transfer speed for which both are suited. While the HID protocol was originally targeted for devices that use USB or Bluetooth connectivity, it can also be implemented using other protocols like I2C and SMBus.

When Windows 8 was released, Microsoft wrote a specialized HID miniport driver that enabled communication between I/O devices and Systems-on-Chip (SoC) over an I2C bus. HID over I2C demonstrated an 87-99% reduction in power consumption when compared to HID over USB, making it useful for host machines with a limited power supply, such as mobile phones or tablets.

Conclusion 

Embedded systems are frequently deployed in contexts where humans do not interface with them on a regular basis. In some cases, they may be expected to operate for years or decades at a time with minimal human interference. Still, there are some cases where an embedded system may require human inputs and may be expected to communicate outputs to humans through a human interface device. In these cases, implementation of HID over USB or HID over I2C ensures rapid installation and seamless communication between the host machine and human interface devices.

For embedded engineers building HID products, Total Phase creates innovative testing and debugging tools that streamline diagnostic processes, saving you time and money.

Request a Demo

 


How Do I Set Up the Komodo CAN Duo Interface to Run Preliminary Tests on the CAN Bus?


Question from the Customer:

I am starting to use the Komodo CAN Duo Interface with the loopback test that was provided with the Komodo Software API. I have tried many times, but I cannot get the test to run, and it’s not consistent: sometimes port A or port B will not enable; sometimes the write command fails.  Am I missing something? What do I need to get this function to work?

Response from Technical Support:

Thanks for your question!  We will review getting started with the Komodo CAN Duo Interface, then go into details about running the loopback test.

Initializing the Komodo CAN Duo Interface

For your setup, we recommend following the steps in the document Komodo Installation steps for Windows, which uses the Komodo GUI Software. Here’s a summary of the instructions:

  1. Follow the instructions provided in the Komodo CAN Duo Interface Quick Start Guide.
  2. To ensure a clean start, uninstall and reinstall the USB drivers and the Komodo GUI Software.
  3. Using the Komodo GUI Software, send and verify CAN messages between the Komodo channels.

Running the API Loopback Test

Here are guidelines for running the API loopback test. Note that a termination resistor is part of the setup.

  1. Connect the DB-9 CAN A port to the DB-9 CAN B port on the Komodo interface.
  2. Connect the USB port on the Komodo interface to the computer USB port.
  3. Connect a 1k ohm resistor between CAN+ and CAN- in the Komodo block screw terminal for the CAN A port.
  4. Open a terminal on the computer.
  5. Enter the command detect.
  6. Verify that the two ports A and B are available. Following is an example of the output from the detect command:
Searching for Komodo Interfaces...
2 ports(s) found:
port=0 (in-use) (1644-328845)
port=1 (in-use) (1644-328845)
  7. Enter the command loopback.
  8. Verify that the two ports A and B receive the correct data. Following is an example of the output from the loopback command:
Features for CAN A: 0x30, CAN B: 0x1c0
Bitrate for CAN A set to 125 kHz, CAN B set to 125 kHz
Timeout for CAN B set to 1000 ms
Enabled target power for CAN A and disabled for CAN B
Sent data: [ 1 2 3 4 5 6 7 8 ]
Received data: [ 1 2 3 4 5 6 7 8 ]
Verifying data... PASS

Why the Termination Resistor is Needed

From the intermittent failures you experienced, it is likely there is an impedance mismatch. To avoid improper termination, we recommend connecting a resistor between CAN+ and CAN- in the Komodo block screw terminal of the CAN A port.  The value of the resistor can vary per system. Some customers have been successful with 120 ohm resistors, others have been successful with 1k ohm resistors.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

What is a CRC (Cyclic Redundancy Check)?


Successful communication between devices is key to having a properly functioning embedded system. Embedded systems rely on and function using protocols, or a set of rules that govern the transmission, synchronization, and error checking of data sent and received between devices. Because the protocol is an essential component to a working embedded system, it is crucial that it operates properly. Because communication errors do occur, many protocols, including USB, CAN, and A2B, include error checking mechanisms such as a Cyclic Redundancy Check, or a CRC.

A CRC is used to flag corrupt data and prevent it from being sent over the bus. With today’s protocols often supporting higher bandwidths and speeds, the CRC is fundamental to keeping data clean and reliable within an embedded system. In this article, we’ll cover the different ways the CRC is used in various protocols and how Total Phase tools help spot communication errors in such events.

CRC in Communication Protocols

Communications protocols often use two CRCs in a packet - one to protect the header of the packet and another to protect the data portion of the packet. While the implementation of the CRC varies between protocols, the purpose remains the same – to create a method for the system to detect errors and initiate a request to retransmit the data or ignore it.

How does the CRC get generated and how does it work? It is all based on an algorithmic calculation that is used to detect inconsistencies between the data being transmitted and received. Essentially, the CRC is a value calculated from a number of data bytes using a generator polynomial, and this value is appended to the outgoing message. The receiver then divides the received message, including the appended CRC, by the same polynomial the transmitter used; if the remainder of this division is zero, the transmission was successful. If the remainder is not zero, an error occurred.
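
The check can be demonstrated in a few lines of Python. This is a simplified CRC (MSB-first, zero initial value, no final inversion; real USB CRC-16 adds bit reversal and inversion on top of the same polynomial), but it shows why appending the CRC makes the receiver's remainder come out to zero:

CRC16_POLY = 0x8005  # x^16 + x^15 + x^2 + 1, the polynomial used by USB data packets

def crc16(data, crc=0):
    # Long division of the message bits by the generator polynomial (mod 2)
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ CRC16_POLY) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

message = bytes([0x01, 0x02, 0x03, 0x04])
check = crc16(message)
# The receiver recomputes the CRC over message + appended CRC; a remainder of
# zero means no bit errors were detected.
assert crc16(message + check.to_bytes(2, "big")) == 0
print(hex(check))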

CRC in the USB Protocol

The USB protocol, or Universal Serial Bus, uses Cyclic Redundancy Checks during transmission to protect all non-PID fields in token and data packets from errors. In USB 2.0, the token and start-of-frame (SOF) packets include a 5-bit CRC (CRC5), while the data packet includes a longer 16-bit CRC (CRC16) to provide adequate support for data payloads reaching up to 1024 bytes.

In USB 3.1 packets, the CRC can be found in the header packet, which consists of the header packet framing, packet header, and link control word. The header is protected by a 16-bit CRC (CRC16) and the link control word is protected by a 5-bit CRC (CRC5). The Data Payload Packet includes a 32-bit CRC (CRC32) to accommodate the large data payloads. Additionally, the Link Command Packets that are used to control various link-specific features also include a 5-bit CRC (CRC5).

USB 2.0 data packet including a 16-bit CRC

USB 2.0 Start-of-Frame packet including a 5-bit CRC

USB 2.0 token packet including a 5-bit CRC.

CRC in the CAN Protocol

The CAN protocol, or Controller Area Network, is known for its robust and reliable communication, as it contains multiple error checking mechanisms, including bit error detection, form error detection, stuff error detection, acknowledgment error detection, and CRC error detection. The CRC fields are contained within the data and remote frames.

The CRC error detection works by including a 15-bit CRC in the data frame to verify that messages are properly sent over the bus. As discussed above, the transmitting node calculates a 15-bit CRC value and transmits this value in the CRC field. All nodes receive this message, calculate the CRC themselves, and compare the values to determine whether they match. If not, the receiving nodes will send an Error Frame over the bus. Additionally, the CAN protocol includes a 1-bit recessive CRC delimiter, which helps prevent form errors and ensures that the bits are properly broadcast on the bus and received correctly on the receiving end.

CRC in the A2B Protocol

The A2B protocol, or Automotive Audio Bus, is another protocol that uses error checking mechanisms to verify proper communication. One of these measures is a CRC used within specific frames to help detect errors over the bus.

A2B Superframe including Synchronization Control and Response Frames

The Synchronization Control Frame (SCF) acts as the control frame, or control header, for nodes, and the Synchronization Response Frame (SRF) acts as the response frame, or response header, from nodes. The entire A2B frame structure is known as a Superframe, which starts with an SCF, includes optional data slots, and ends with an SRF. Both frames include cyclic redundancy checks (CRCs) to help detect upstream and downstream data errors.

For downstream data error detection, a 16-bit CRC is used within the SCF, allowing the receiving side to detect SCF data errors that occur during transmission. The SCF includes a preamble that indicates the start of a Superframe and provides a bit pattern used by slaves for clock and frame synchronization. If the slave does not detect a frame sync, the slave will indicate a CRC error.

For upstream data error detection, a 16-bit CRC is also used within the SRF, allowing the receiving side to detect SRF data errors that occur during transmission. The interrupt request fields have an additional CRC inside the SCF to avert false interrupts from being triggered. The SRF also has a preamble to indicate the start of a response frame, and it provides a bit pattern used by upstream nodes for clock and frame synchronization. If the upstream node does not detect the frame sync, CRC errors will be indicated.

Detecting CRC Errors with Total Phase Tools

Total Phase offers numerous debugging tools for I2C, SPI, USB, CAN, eSPI, and A2B protocols that can be used in conjunction with the Data Center Software to capture, decode, and analyze data occurring on the bus in true real time. The USB, CAN, and A2B protocols that incorporate Cyclic Redundancy Checks can benefit from our tools to capture and analyze data and bus errors in a variety of ways.

Our line of Beagle USB protocol analyzers, Komodo CAN interfaces, and the A2B Bus Monitor, used in conjunction with the Data Center Software, allow users to detect, flag, and filter data errors, making it easy to evaluate errors detected by CRC.

The Data Center Software also incorporates numerous ways to interact with USB data specifically, including performing advanced triggers using USB CRC conditions. Depending on the Beagle USB Protocol Analyzer tool, users can perform simple and complex USB 2.0 and USB 3.0 triggers. Simple triggers for USB 2.0 and USB 3.0 can trigger on high-level packet types (including header packets and data payload packets), data patterns, and CRC errors, while complex triggers for USB 2.0 allow users to match on specific state-based transactions, errors, events, and timers. Complex triggers for USB 3.0 include triggering on a specific packet type or data pattern, in addition to bus events and timers.

Below are examples of the Complex Data Match Configurations for USB 2.0 and USB 3.0 data that are based on Error Packet Types.

USB 2.0 Error Match Action Unit      

USB 3.0 Error Data Match Action Unit

This allows users to match on any packet type which exhibits an error. For USB 2.0, the matching criteria include CRC errors, corrupted PIDs, jabber, and general PHY receive errors on any packet that appears on the bus. For USB 3.0, the errors which can be matched are a CRC error, a framing error, or any unknown packet.

For more information on how our tools can help analyze data and bus errors, including CRC errors, please contact us at sales@totalphase.com.

Request a Demo

What Causes PHY Errors in USB 3.0 and How Can I Correct Them?


Question from the Customer:

I am working with Active Optical Cables (AOCs), and I am using the Beagle USB 5000 v2 SuperSpeed Protocol Analyzer - Ultimate Edition. I am having a problem diagnosing one of the USB 3.0 Standard-A to USB Micro-B cables with an industrial USB 3.0 vision camera (JAI GO-5000C-USB).  When I connect the camera directly to my PC with an AOC, it works without any issues. However, when I connect and use the Beagle USB 5000 v2 analyzer between the camera and the PC, the camera continuously restarts and never successfully connects.

Do you have any tips for establishing a successful connection using active cables? I see there are several PHY (physical layer) errors on the trace – what causes those errors?

Response from Technical Support:

Thanks for your questions! PHY errors often occur during USB training while the link is being established but should not occur afterwards.

The Beagle USB 5000 v2 analyzer has active front-end circuitry between the USB 3.0 host and the USB 3.0 device that re-transmits the signal between the host and device as it passes through the analyzer. When the link comes out of the idle state, it is typical for PHY errors to occur while the link is being established during training. Once the link is established, PHY errors should be uncommon. Most likely, electrical inconsistencies cause PHY errors to occur after USB training.

We will start with explaining USB training, then finish off with our recommendation for establishing the connection for your setup.

USB 3.0 Training

The Low Frequency Periodic Signaling (LFPS) is used by USB ports to communicate across a link that is under training. The link is in either a warm reset state or a low power state. The LFPS type is determined based on the burst and repeat times of a signal, as well as the LTSSM state.

To establish a link, USB 3.0 has training that includes the TSEQ, TS1, and TS2 states. During the training period, each link sends training sequences; both upstream and downstream devices send TSEQ, TS1, and TS2 sequences in that order. The training period begins when one link sends TSEQ, and is completed after both links send their last TS2. Here is a summary of what happens:

  1. The link first sends multiple TSEQ states.
  2. Then the link sends TS1 states.
  3. After the link has a specific number of clean TS1, the link starts to send TS2.

The link is usually trained in 1-2 µs. The time it takes for the uplink to be trained is measured from when the first TS1 sequence appears on both links until the first downlink TS2 sequence appears.

How PHY Errors Occur During Training

It is normal for some PHY errors to occur during the training period. This is because the transceiver clock (in the transmitter link) and the receiver clock (in the receiver link) are different. During the training period, the receiver clock is compared to the transmitter clock.

Initially, the receiver clock is not yet trained to the transmitter clock.

  • If the receiver clock is faster than the transmitter clock, then the receiver link adds sub symbols.
  • If the receiver clock is slower than the transmitter clock, then the receiver link misses symbols.

After the training period has completed (that is, after the last TS2 sequence), PHY errors are no longer expected.

How the Analyzer Manages PHY Errors

The Beagle USB 5000 v2 analyzer handles errors that occur during and after training.

PHY Errors during Training

When the Beagle USB 5000 v2 analyzer detects some PHY errors during the training period, they are not marked in red on the display. When the Beagle USB 5000 v2 analyzer detects a significant number of PHY errors during the training period, more than expected, the PHY errors are marked in red.

PHY Errors after Training

After training, a PHY Error is reported as a special-case bus event that matches any of the following error conditions:

  • Disparity Error
  • Elastic Buffer under-run
  • Elastic Buffer over-run
  • 8b/10b Decode Error

Although the PHY Error collapses these 4 errors into a single match, it is possible to distinguish some of the different errors in the captured data.

  • When an elastic buffer under-run error occurs, an EDB symbol (K28.3) is inserted into the data stream to fill the under-run.
  • When an 8b/10b Decode Error occurs, a SUB symbol (K28.4) is substituted in place of the bad 10b symbol in the data stream.

Here is a video that could be helpful for troubleshooting: Using Data Center Software's LTSSM View for USB 3.0 Debugging.

PHY errors can also occur due to electrical inconsistencies.

Recommendation for Correcting PHY Errors

For resolving electrical inconsistencies, we recommend placing a self-powered SuperSpeed hub between your Beagle USB 5000 v2 analyzer and your target device/host.

A hub in the setup acts as a re-driver for the signals, which helps maintain signal integrity. If you need more information about signal performance, we recommend contacting the cable manufacturer. There may be issues with a cable that does not fulfill the requirements and specifications of your setup.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

An Overview of SMBus Functions


System Management Bus Protocol, also known as SMBus, is a two-wire protocol that supports basic communication functions, often within computer motherboards. Defined by Intel and Duracell in 1994, the standard has grown steadily in usage due to its functional benefits and compatibility with the existing I2C two-wire protocol. Today, the SMBus standard is maintained by technologists at the System Management Interface Forum, a forum for early adopters of communication protocols with a mission to support compatible technologies in power management.

In this week's blog post, we're taking a deep look at the SMBus protocol and its functions. We'll explain the features and five most basic functions of the protocol while paying special attention to the various types of messages that can be passed between master and slave devices using SMBus.

What is SMBus Protocol?

A serial communication protocol establishes a common language and syntax for communication between devices. Before we can understand the contents of the messages that pass between master and slave devices on the bus, we must first understand the underlying structure of the messages and their component parts.

Data transfers using the SMBus protocol originate from a master device. They begin with the establishment of a start condition, after which the master device transmits a 7-bit destination address for the data transfer. Messages on the bus can be addressed to one or more slave devices. Following the 7-bit slave address, a final bit of data is transmitted as part of the message. This bit is known as the Read/Write bit (Rd/Wr bit); while it does not form part of the slave address, it serves special functions in SMBus messages.

Next, the addressed slave device responds with an ACK - an acknowledgement bit that indicates the original message was received. Once the master and slave devices have made contact in this way, a variety of additional functions become available. We will discuss these in the sections below.

Image courtesy of Unsplash: Computer motherboards are a common use case for the SMBus protocol.

 

What Are the Main SMBus Functions?

SMBus functions may also be referred to as SMBus protocols. The most recent version of the System Management Bus (SMBus) specification, Version 3.0, outlines thirteen separate protocols for message transfer using SMBus. Below, we summarize the five main protocols of SMBus and describe how they are executed in interactions between master and slave devices.

Quick Command

A quick command is the simplest type of command that can be transmitted using SMBus. All of the information for the command is contained in a single bit - either a 1 or a 0. To send a quick command, the master device generates a Start condition and addresses the command to the appropriate slave device(s) with a 7-bit address. The bit following the slave address, known as the Rd/Wr bit, contains either a 1 or a 0, which may turn a device on or off or enable/disable a device feature. When the command is received, the slave device sends an ACK bit to acknowledge it, and a Stop condition is generated by the master device.

Send Byte

To send a byte of data, the master first generates a start condition and addresses the message to a slave device. The Rd/Wr bit will have a value of 0, indicating to the slave that the master wants to send. Once an ACK bit is received, the master will send a single byte (8 bits of data) to the slave device. The byte may contain any of up to 256 encoded commands. Following the data transfer, the slave replies with an ACK bit and the master device will generate a Stop condition.

Receive Byte

Receiving a byte is essentially the opposite of sending a byte, and thus, many of the steps are identical. The main difference is that the Rd/Wr bit will contain a value of 1 indicating that the master device wants to receive data instead of send it. The slave will send an ACK bit, then transfer a byte of data from the slave to the master device. When the data byte is received, the master device will respond with a NACK bit before generating a Stop condition and ending the data transfer.

Write Byte

The write byte function begins in a familiar way, with the generation of a Start condition, slave addressing, and the Rd/Wr bit set to 0, indicating the master wants to write a message. Once the slave acknowledges the packet, two bytes will be sent by the master. The first one contains the command code and is followed by an ACK bit from the slave. The second byte will contain 8 bits of data, and will be followed by an ACK from the slave device before a Stop condition is generated by the master.

Read Byte

The read byte function makes use of a special SMBus protocol feature called a repeated Start condition. First, the master device creates a Start condition, then addresses the initial message using a 7-bit slave address. The Rd/Wr bit here is initially set to "Write" (0). Once the slave acknowledges the message, a command code is transmitted indicating that the master device wants to read data from the slave device - this is where things get tricky.

When the command code is acknowledged by the slave device, the master device generates a repeated Start condition and addresses the slave again - only this time, the Rd/Wr bit is set to Read (1). The master is now ready to read data from the slave. The slave will send an ACK bit and either one or two bytes of data. Once the transfer has concluded, the master device replies with a NACK bit and a Stop condition will be generated to terminate the transmission.
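To make these sequences concrete, here is a minimal sketch of Write Byte and Read Byte built from raw I2C transfers with Total Phase's Aardvark Software API (aardvark_py). The slave address, command code, and data value are hypothetical placeholders, and using AA_I2C_NO_STOP to produce the repeated Start is our assumption for illustration:

# A minimal sketch of SMBus Write Byte and Read Byte built from raw I2C
# transfers with the Aardvark Software API. Slave address, command code,
# and data value are hypothetical placeholders.
from array import array
from aardvark_py import *

SLAVE_ADDR   = 0x48    # hypothetical 7-bit SMBus slave address
COMMAND_CODE = 0x01    # hypothetical command code
DATA_BYTE    = 0x5A    # hypothetical data byte

handle = aa_open(0)                          # open the first Aardvark adapter
aa_configure(handle, AA_CONFIG_GPIO_I2C)     # enable I2C signaling
aa_i2c_pullup(handle, AA_I2C_PULLUP_BOTH)    # enable on-board SCL/SDA pull-ups
aa_i2c_bitrate(handle, 100)                  # 100 kHz bus rate

# Write Byte: Start, address + W, command code, data byte, Stop
aa_i2c_write(handle, SLAVE_ADDR, AA_I2C_NO_FLAGS,
             array('B', [COMMAND_CODE, DATA_BYTE]))

# Read Byte: Start, address + W, command code, repeated Start,
# address + R, one data byte, Stop
aa_i2c_write(handle, SLAVE_ADDR, AA_I2C_NO_STOP, array('B', [COMMAND_CODE]))
(count, data_in) = aa_i2c_read(handle, SLAVE_ADDR, AA_I2C_NO_FLAGS, 1)
print("Read Byte returned 0x%02X" % data_in[0])

aa_close(handle)

Because SMBus is layered on I2C, the same pattern extends to Write Word and Read Word by sending or reading two data bytes instead of one.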

What other Functions are Supported in SMBus?

In addition to these basic five, there are eight more functions supported in the SMBus protocol:

  • Process Call - allows the master to send data to a slave and wait for the slave to return a value that depends on the data
  • Block Write/Read - allows the master to write or read a block of data to or from a slave device, up to 32 bytes in earlier SMBus versions and up to 255 bytes in SMBus 3.0
  • Block Write-Block Read Process Call - allows for the exchange of up to 255 total bytes of data between master and slave as part of a single function
  • SMBus Host Notify Protocol - enables a slave device to communicate with the SMBus host controller
  • Write 32 Protocol - a protocol for sending up to 32 bits of data from a master device to a slave device
  • Read 32 Protocol - a protocol for reading up to 32 bits of data from a slave device
  • Write 64 Protocol - a protocol for sending up to 64 bits of data from a master device to a slave device
  • Read 64 Protocol - a protocol for reading up to 64 bits of data from a slave device

Conclusion

SMBus has seen widespread adoption in basic electronics engineering due to its simplicity and low power consumption. For engineers building devices that use SMBus, Total Phase offers the Aardvark I2C/SPI Host Adapter and Beagle I2C/SPI Protocol Analyzer, both capable of working with SMBus devices, to help you save time and money as you build, test, and market your embedded electronic device.

Request a Demo

 

How Can I Program a Microwire Device with Specific Clock Cycles per Write Command?


Question from the Customer:

Is there a way I can use the Aardvark I2C/SPI Host Adapter to program a microwire device? Here is a summary of device requirements:

  • A specific number of clock cycles must coincide with the operation being sent.
  • The number of clock cycles is not a multiple of 8; this device is not constrained to a number of bytes being sent (like SPI devices).

For example, the write command has 29 clock cycles on the device being programmed. What complicates the programming is that if the device does not detect the expected number of clocks, the operation is terminated. It looks like I cannot just pad the command with 0s.

What do you recommend?

Response from Technical Support:

Thanks for your question! Essentially, the Aardvark I2C/SPI Host Adapter interfaces with a microwire device in the same manner as a regular SPI device. The only difference is the labeling of the pins. We will start with a summary of how the Aardvark adapter works, then follow through with two recommendations.

How the Aardvark I2C/SPI Host Adapter Writes Data

  • The Aardvark adapter is not capable of sending data bit by bit; data is only sent in multiples of bytes.
  • Each byte is sent in 8 clock cycles.
  • The Aardvark adapter has a delay, td (Master: 7 to 9 µs; Slave: minimum 4 µs), between two consecutive bytes.
  • There is approximately 9 µs of setup time required between each byte, which results in a total transmission period of the byte transmission time plus 9 µs.

In summary, the Aardvark adapter can transfer up to 8 bits of data without a td delay. However, there will be a td delay when transferring 16 bits (or more) of data.

The delay between each byte of data sent is due to the characteristics of the Aardvark adapter. For more information, please refer to the section SPI Signaling Characteristics of the Aardvark I2C/SPI Host Adapter User Manual.
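As a rough, illustrative calculation (assuming a 1 MHz SPI bitrate, so 8 µs per byte, and a 9 µs gap between bytes): a 4-byte transfer takes about 4 × 8 µs of clocking plus 3 × 9 µs of inter-byte delay, or roughly 59 µs in total, so the inter-byte gaps account for a larger share of the transfer time as the bitrate increases.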

Using the Control Center Serial Software

Here is a basic example of writing two 8-bit bytes, AB CD, in the transaction log of the Control Center Serial Software and then submitting the send: assert SS, send AB CD, de-assert SS.

In this case, there will be gaps in the middle, but it is still a single transaction with the slave select (SS) asserted at the beginning and de-asserted at the end. There will be a delay between each byte of data sent because of the signaling characteristics of the Aardvark adapter.

Aligning the Packet

You may be able to use padding, but it would occur before the first transmitted bit:

  • Pad the MSB with 0s at the front before the first transmitted 1. This action would “byte-align” the packet.

The caveat is that this approach could work only if the device is not affected by the 9 µs setup time between bytes; a sketch of the byte-alignment idea follows below. If that timing is an issue, then we have another recommendation.
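Here is a minimal sketch of the byte-alignment approach using the Aardvark Software API; the 29-bit command value, SPI mode, and bitrate are assumptions for illustration, not values taken from the microwire device's datasheet:

# A minimal sketch: pad a 29-bit command with leading zeros to 32 bits,
# split it into bytes (MSB first), and send it as one SPI transaction.
# The command value, SPI mode, and bitrate are hypothetical.
from array import array
from aardvark_py import *

NUM_BITS = 29
command  = 0x12345678 & ((1 << NUM_BITS) - 1)    # hypothetical 29-bit command

# Round up to a whole number of bytes (29 bits -> 32 bits, 4 bytes).
num_bytes = (NUM_BITS + 7) // 8
data_out = array('B', [(command >> (8 * (num_bytes - 1 - i))) & 0xFF
                       for i in range(num_bytes)])

handle = aa_open(0)                              # open the first Aardvark adapter
aa_configure(handle, AA_CONFIG_SPI_I2C)          # enable SPI signaling
aa_spi_configure(handle, AA_SPI_POL_RISING_FALLING,
                 AA_SPI_PHASE_SAMPLE_SETUP,
                 AA_SPI_BITORDER_MSB)            # SPI mode 0, MSB first
aa_spi_bitrate(handle, 1000)                     # 1 MHz, adjust as needed

# SS stays asserted for the whole transfer, so the device sees 32 clocks;
# this only helps if it tolerates the three extra leading zero bits.
(count, data_in) = aa_spi_write(handle, data_out, 0)
aa_close(handle)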

More Programming Options with Promira Serial Platform

For your application, to send multiple bits in a single transaction, we recommend using the Promira Serial Platform. This device can be licensed with SPI Level 1 / SPI Level 2 / SPI Level 3 Application(s) as needed for your requirements.

The default is 32 bits per word. However, the word length can be configured using the Promira Software API I2C/SPI Active, allowing word lengths that are not multiples of 8. This way, words are transmitted without the 9 µs inter-byte delay of the Aardvark adapter.

The API supports multiple operating systems and programming languages. In addition, example programs are provided that can be used as-is or modified as needed. For more information, please refer to the section API Documentation of the Promira Serial Platform I2C/SPI Active User Manual.

We hope this answers your question. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

How Individual Cable Testing Helps Uncover Underlying Cable Issues and Ensures Quality in Production


Creating safe and quality-made cables requires various evaluations and testing before landing in the hands of the consumer. To be certified, cables must go through design validation and compliance testing. But even after cable designs are approved, it is important that cables be tested for quality and safety standards during production. Learn the different ways cables are approved and tested and why individual quality control testing is just as crucial in catching underlying cable issues that might otherwise be overlooked in production.

Cable Standards Organizations

Cable standards organizations, including USB-IF, HDMI.org, VESA, and MFi, each have their own set of standards and oversee the development of cables looking to comply with their specifications.

USB – USB-IF

USB-IF, or USB Implementers Forum, is the organization that oversees and administers the set of standards and specifications for all USB cables. The organization also administers the compliance testing procedures, which examine the cable's design and functionality. Cables that pass compliance tests are able to use the USB certified logo on USB products.

HDMI – HDMI.org

HDMI.org is the organization that oversees and administers the set of standards and specifications for HDMI cables. Over the years, HDMI has released multiple new cable specifications and standards, each encompassing criteria for video quality and speed. HDMI.org administers cable compliance testing, where cables are examined against the cable specification for design, function, and performance. Cables that pass compliance tests are able to use the HDMI certified logo on HDMI products.

DisplayPort – VESA

VESA, or Video Electronics Standards Association, is the organization that established and continues to oversee the development of the DisplayPort interface. VESA has introduced multiple DP specifications over the years, with each new iteration offering higher bandwidth, faster speeds, and new capabilities. VESA administers cable compliance tests that evaluate the cable's design and adherence to the cable specifications. Cables that pass compliance testing are able to use the VESA certified logo on DisplayPort products.

Apple Lightning – Apple MFi Program

The Apple MFi program, or Made for iPhone/iPad/iPod, is the official Apple licensing program for developers of hardware and software peripherals that work with Apple's iPod, iPad, and iPhone. This program administers cable design and performance testing for third-party Apple Lightning cables to ensure they meet the required standards. If cables adhere to the MFi specifications, cable manufacturers are able to use the MFi logo on Apple Lightning products.

Cable Design Validation and Compliance Testing

In order for cable manufacturers to have their cables approved and certified by the cable standards organization, they must undergo various evaluations including design validation and compliance testing.

Standards organizations perform design evaluations on cables in order to get an impression of the overall cable design including the connector type, type of wiring, and the cable length. Essentially, cable manufacturers are to submit a “blueprint” of their cable to have their designs validated. Additionally, cable manufacturers are required to submit a sample of the prototyped cable for review. If the design and cable pass, cable manufacturers are approved to move forward in production.

Cable compliance testing is also performed in order for cable manufacturers to acquire certification. Compliance testing consists of performing different tests on multiple facets of the cable including its pin and wiring configurations, power supply and consumption, and cable speed.

While these testing processes for USB, Apple Lightning, HDMI, and DisplayPort verify the cable design and performance, these tests are performed pre-production and do not take into account any variability and errors that occur during production.

Quality Control Testing Methods

Functional Cable Testing Method

Because compliance testing only goes so far in validating cable safety and quality, it is just as important for cable manufacturers to perform quality control testing on their cables during production. The functional testing method is commonly used in lab and production settings as a way to verify the cable is able to perform its function. This generally consists of manufacturers testing the cable with other devices to see if it works as intended, i.e. charging devices or transferring video/audio. While functional testing allows developers to determine utility, it does not verify that the cable is up to standard and follows the cable specifications. For instance, if a cable is plugged into a phone and the phone charges, the cable appears to function; however, if the power negotiations and the power supply and consumption levels are not within specification, this won't be seen and can be dangerous for the consumer.

Statistical Process Control (SPC) vs Individual Quality Control (IQC)

Other cable testing methods include quality control testing, which can either be performed on a statistical or individual scale.

Statistical Process Control (SPC) is a testing method that allows manufacturers to statistically calculate the failure rate of their product. This is performed by taking a sample lot of the product, testing each lot, and calculating the likelihood that a certain proportion of the product is not up to safety and quality standards.

Individual Quality Control (IQC) is a testing method that allows manufacturers to test each product coming off the production line. Rather than estimating the number of failures per lot, this allows manufacturers to get a very accurate depiction of which cables meet the standards.

Both SPC and IQC have their pros and cons. While SPC is considered to be less expensive than IQC, the failure rate is only a statistical calculation, which means that there are opportunities for faulty cables to be shipped to customers. IQC, on the other hand, is considered to be a more expensive alternative that requires more resources to execute, but it allows cable manufacturers to be more confident that each cable going to consumers is safe and reliable, minimizing risk.

Why Individual Quality Control is Vital to Lab and Factory Testing

There are multiple ways for cable manufacturers to ensure their cables are up to standard, all of which hold equal importance. Manufacturers looking to comply with the cable specification and use the cable organization's "certified" logo need to go through arduous compliance testing to be able to do so. While this approves the cable production process and design, it does not consider the variability that occurs during mass-scale production.

Rather than just functional or SPC testing, individual quality control testing is necessary in cable manufacturing and should be performed in addition to design validation and compliance tests. Cable production is an involved process that requires human intervention due to the complexity of soldering pins and wires and assembling the different cable components. With human involvement, the opportunities for product variability rise steeply.

Testing each individual cable is important because cable manufacturers who have neglected to do so have experienced negative consequences that have been damaging to both consumers and the company’s brand.

Common Reasons for Underlying Cable Issues and Failures

Poor Wiring and Pin Alignment

Each cable connector, whether it be USB, HDMI, DisplayPort, or Apple Lightning, has multiple pins that perform specific functions including data signaling and power transfers. It’s important that cable connectors conform to the cable specification so they are interchangeable with other conforming cable connectors and ports with the same functions. If there is poor pin alignment and wiring, including shorts or opens, this can cause failed connections or damage to the devices.

Power Related Issues

Depending on the cable type, certain pins, including GND and VBUS, carry current across the cable, and the resistance of those conductors results in a measurable DC resistance and voltage drop (IR drop). If the DC resistance and voltage drop measurements are out of specification, this can cause power-related issues in the cable. The greater the resistance of the circuit, the higher the voltage drop. Low voltage at the powered equipment can cause improper operation of devices, waste power, and cause damage to the device. A high-resistance connection may also produce higher levels of heat, which can potentially cause fires in certain environments. Verifying that these power measurements are within specification is important to avoid such dangerous situations.
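As an illustrative example (these numbers are assumptions, not specification limits): a cable carrying 3 A of charging current through a total round-trip resistance of 0.2 Ω drops V = I × R = 0.6 V and dissipates P = I²R = 1.8 W as heat in the cable itself, which is why small increases in resistance matter at higher charging currents.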

Invalid E-Marker in USB Type-C Cables

An E-Marker, or electronic marker, is a chip within a USB Type-C cable that provides information about the cable's parameters, including the data speed, current capacity, power delivery specifications, and other information such as the Vendor ID and Product ID. For USB specifications that support higher power levels such as 5 A, E-Markers are required so that connected devices can understand the cable's capabilities. Well-constructed USB Type-C cables with E-Markers are expected to have the same capabilities as their advertised VDO data, so it is important to verify that the E-Marker is valid.

Poor Signal Quality

Poor signal quality prevents cables and devices from effectively communicating with each other. It can cause the cable to operate at slower speeds than supported, resulting in inefficient transfer of data, and can even contribute to signal drop-outs or no signal lock. Signal degradation can be a result of numerous factors, including poor wiring, electrical noise, crosstalk, or insertion loss. Ensuring an effective signal can be difficult since users cannot always see deviations from the cable specifications with the naked eye. Having the ability to visualize the cable's signal will help ensure data is being effectively transferred.

Apple Lightning Specific Issues

Apple Lightning cables are made specifically for Apple products and follow their own set of standards and specifications. These cables also have to follow certain Lightning plug and power guidelines in order to meet MFi standards. Apple Lightning cables include additional ways to protect devices, but if not constructed properly, they may cause issues.

Quiescent Current Consumption

Quiescent current is described as the amount of current consumed by a circuit when it is not in use. A high quiescent current can be harmful to a battery within a device, as it can lessen its overall lifespan.

Over-Voltage Protection

Over-voltage protection helps prevent harmful levels of voltage from being transferred to devices. Apple Lightning cables that don't have adequate over-voltage protection may allow an overload of voltage to reach the device, which can leave the motherboard damaged or dead.

About the Advanced Cable Tester v2 Cable Testing Tool

The Advanced Cable Tester v2 is the next-generation cable testing solution offered by Total Phase. Because it supports numerous cable types, it can be configured as a USB cable tester, Apple Lightning cable tester, HDMI cable tester, or DisplayPort cable tester. The Advanced Cable Tester v2 is able to test these cables against the cable specification in a matter of seconds and flag errors for easier debugging. It is an affordable cable testing solution that can test cables during cable development, as well as perform mass quality control testing in factory settings. It tests critical cable components for safety and quality including pin continuity, DC resistance, E-Marker validation, and signal integrity.

Pin Continuity Testing

The Pin Continuity test checks whether pins on one connector are continuous with the corresponding pins on the other connector. The report compares both ends of the plug to the expected plug values, and any deviations are flagged on the test. This test helps detect shorts, opens, and incorrect routing, and can help prevent dangerous occurrences like VBUS/GND reversals.

DC Resistance Testing

DC Resistance ("DCR") and IR Drop testing confirms that each power pin (VBUS and GND) is capable of carrying the required current to meet the applicable specification. For USB Type-C cables, each power pin is individually measured, then the cable as a whole is tested. If present, USB Type-C SBU and CC lines are also tested for DCR.

E-Marker Validation

The E-Marker test checks for the presence or absence of E-Marker chip(s) in a USB Type-C connector. The Advanced Cable Tester v2 supports Power Delivery Specification, Revision 2 (PD2) and Power Delivery Specification, Revision 3 (PD3).  If an E-Marker is found to be present, the tester will query the device to read all available Vendor Data Objects (VDOs). Properly constructed USB Type-C cables with E-Markers are expected to have the same VDO data. Any deviations from the expected values are flagged on the report.

Signal Integrity Testing

The Signal Integrity test measures the quality of the differentially paired wires through the cable and is configurable from 1030 MHz up to 12.8 GHz on up to 5 differential pairs. The signal integrity test performs a speed test of the cable and is accompanied by eye diagrams that provide a visualization of the quality of signal, including a mask to provide a reference for the HEO and VEO values. If the test achieves lock on the indicated differential pair, the eye diagram image will be displayed on the test. If no lock was achieved, a no-lock image will be displayed.

Passing Signal Integrity test of a USB Type-C to USB Type-C cable

 

Apple Lightning Specific Testing*

Apple Lightning testing ensures proper operation of the Lightning plug, including proper Lightning plug power-up, over-voltage, recovery, quiescent current consumption, and current limit testing per the Apple MFi specification. The Advanced Cable Tester v2 is the only tester available that can perform the tests required by the Apple Accessory Interface Specification (R31 and greater).

  • Lightning Plug - indicates what type of Lightning connector was found in the cable under test
  • Over-Voltage Protection - tests the cable's over-voltage protection
  • Quiescent Current - tests the current draw of the cable under different scenarios
  • Source Measurement Unit - tests the current limit of the cable in different scenarios with a source measurement unit (SMU)

*These test reports are only available to units that have been licensed for MFi members.

Advanced Cable Tester v2

The Advanced Cable Tester v2 supports the following cable types:

  • USB Type-C to USB Type-C (Type-C 1.3, USB 3.2 and earlier specifications)
  • USB Type-C to USB Standard-A (USB 3.2 and earlier version specifications)
  • USB Type-C to USB Micro-B (USB 3.2 and earlier version specifications)
  • USB Type-C to USB Standard-B (USB 3.2 and earlier version specifications)
  • USB Standard-A to USB Micro-B (USB 3.2 and earlier version specifications)
  • USB Standard-A to USB Standard-B (USB 3.2 and earlier version specifications)
  • Lightning (USB 2.0) to USB Standard-A
  • Lightning (USB 2.0) to USB Type-C
  • HDMI Type A to HDMI Type A (up to 12.8 Gbps per channel)
  • DisplayPort to DisplayPort (DisplayPort 2.0 and earlier version specifications)

Watch the Advanced Cable Tester v2 in action:

 

Have further questions on the Advanced Cable Tester v2 and how it can be implemented in your operations? Schedule a demo with us to learn more.

Request a Demo

How Can I Monitor and Store Captured SPI Data Over Long Evaluation Periods and Test Runs?


Question from the Customer:

I am looking for tools to use with an upcoming project. The tasks include extended evaluation periods that can last up to five days.  There are multiple SPI buses to monitor with speeds that vary from 1-2 Mbps. Although the periods can be lengthy, the data transactions will not be continuous; I estimate the data log per session will be 50 GB or less. My questions:

  • Which Total Phase tools would work best for this application?
  • I assume I will need one tool for each bus – can they log together on the same lab computer?
  • What are the constraints I need to know about logging this data? Available memory? Bus speed?

Response from Technical Support:

Thanks for your questions! For your project, we recommend the Beagle I2C/SPI Protocol Analyzer, which monitors SPI data non-intrusively. We also have two recommendations for storing the captured data, which are described below.

Monitoring the SPI Bus and Storing Data

Depending on the amount of data and the available memory on your machine, captured data can be stored via RAM or disk drive. In addition to storing data, the Data Center Software lets you observe real-time data capture and display – seeing packets as they occur on the bus. Bit-level timing is available down to 20 ns of resolution. To quickly become familiar with the Beagle I2C/SPI analyzer, we recommend starting with the Beagle I2C/SPI Protocol Analyzer Quick Start Guide. Afterwards, you can use the Data Center Software or Beagle Software API to start monitoring your SPI devices.

You will need a separate Beagle I2C/SPI analyzer for each bus.  Each analyzer can be run from the same computer. The maximum number of analyzers depends on how many devices the computer can support on its USB bus.

Storing SPI Data using Data Center Software

Here is an overview of using the Data Center Software.

The Data Center Software streams the capture to the RAM on your PC. In this case, the capture limit is not determined by the disk space; instead, it is determined by the available RAM. The Capture Control dialog displays the Software Capture Buffer, which indicates the total amount of memory available on the analysis PC. This Capture Data Limit can be adjusted in the Capture Settings menu.

By default, the capture limit is set to 50% of the available memory. The upper limit for capture is 80% of the available memory. On an extremely busy computer, the capture limit should be set lower; for example, if the application starts swapping memory, incoming capture data may be lost. We suggest making as much RAM as possible available to the capture to achieve the best results.
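As an illustrative calculation (the 16 GB figure is an assumption, not a requirement): on an analysis PC with 16 GB of RAM, the default capture limit is about 8 GB (50%) and the maximum is about 12.8 GB (80%). A 50 GB session would therefore not fit in a single RAM-based capture, which is why the looping capture-and-save approach or an API-based capture to disk, both described below, are better suited to long test runs.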

Controlling the Data Center Software via Remote Terminal

With CLI (command line interface) commands and a remote console, you can control the Data Center Software from an external process, such as a script. This opens a telnet port that other applications (or computers) can connect to in order to control the software's actions. For more information, see section 5.1.5 Command Line Options of the Data Center User Manual.

You can also create a script to automate tasks that can be run from a remote computer. An example is described in Controlling Data Center Software with a Remote Terminal and a Python Script. In this article, a Python script is used to capture and save data over a specific period: 3 seconds. A similar script could be used to export data as well. An example is described in the following subsection.

Capturing Data from a Remote Computer

Capturing data with a Python script:

Here is an example of capturing data with a remote computer. The computer that connects to the Beagle I2C/SPI analyzer is PC-1. The computer that runs the script is PC-2.

  1. Connect PC-1 with the Beagle I2C/SPI analyzer, and configure it with Telnet.
  2. Configure a remote PC-2 with Telnet, and open Telnet terminal to access PC-1.
  3. On PC-1, go to ...\data-center-windows-x86_64-v6.73\bin and run datacenter.cmd from the command line as follows: > datacenter.cmd -r 6000. With this step, users can control the Data Center application when they cannot physically be in front of the machine running it.
  4. In PC-2, run the script.

We have a Python script that you can easily modify for this use. This script is programmed to start capture at 8 pm and stop capture at 5 pm. You can alter the programmed timing as needed.

To do so:

  1. Replace localhost with the IP address of PC-1: line no.64 (tn = telnetlib.Telnet('localhost', 6000))
  2. Replace save path to the desired path in PC-1: line no.73 (SAVE = "save(u'/tmp/foo_%s.tdc', {'no_timing': False, 'filtered_only': False}, True)")

You will also need to modify this script to work with the Beagle I2C/SPI analyzer, as this example was originally created for the Beagle USB 12 Protocol Analyzer.

Looping the script:

You can loop the script to continuously capture and save data.

  1. Start capture.
  2. Run capture for a specified time.
  3. Stop capture.
  4. Save data to a file.
  5. Loop back to step 1.

Please note: saving data takes time; the longer the capture period, the more data is collected, and the more time is required to save it. A loss of data may occur during step 4. One possible workaround is extending the capture period to create a "time buffer" for saving data.
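Here is a minimal sketch of that loop, assuming the Data Center Software has been started on PC-1 with datacenter.cmd -r 6000 as described above. The capture_start() and capture_stop() command names are placeholders; take the exact commands from the referenced example script or the Data Center documentation. The save() command follows the example shown earlier.

# A minimal sketch of the capture/save loop; command names marked as
# placeholders are assumptions, not verified Data Center CLI commands.
import telnetlib
import time

HOST = '192.168.1.10'        # replace with the IP address of PC-1
PORT = 6000
CAPTURE_SECONDS = 60 * 60    # capture for one hour per loop iteration

def send(tn, command):
    # Each command is sent as a single line to the Data Center remote console.
    tn.write(command.encode('ascii') + b'\n')

while True:
    tn = telnetlib.Telnet(HOST, PORT)
    send(tn, "capture_start()")          # placeholder command name
    time.sleep(CAPTURE_SECONDS)          # let the capture run
    send(tn, "capture_stop()")           # placeholder command name
    filename = time.strftime('/tmp/spi_%Y%m%d_%H%M%S.tdc')
    send(tn, "save(u'%s', {'no_timing': False, 'filtered_only': False}, True)" % filename)
    tn.close()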

A more detailed script can be created with the API.

Storing Data via API Scripts

The Beagle Software API provides the means to create your own application. The API supports multiple operating systems and programming languages. Functional scripts are provided with the API that can be used as-is or modified as needed. The script entitled capture_usb480.c, which uses the Beagle USB 480 Protocol Analyzer to capture and store data to a hard drive, may be useful for your application and can be modified per the requirements of your setup and tools.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you need more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo


The Future of the Internet of Things in 2020


The Internet of Things (IoT) has become one of the largest areas of technological advancement. In the past decade, internet connectivity has been extended from mainframes to mobile smart phones. Now, with the IoT, internet connectivity is present in many types of devices that we interact with on a daily basis, including home appliances, vehicles, machines, and consumer products.

In this week's blog post, we're taking a look at how trends in the IoT are shifting as we enter a new decade. With so many big changes going on, our goal is to understand how the IoT has evolved up to this point and to hopefully predict the future of the internet of things in 2020.

What Does the Future of the Internet of Things Look Like?

Billions of People Will Use the Internet

One of the most important things to realize about the future of the IoT is that it will be closely connected to the global development of other requisite technologies, especially the internet and mobile devices. As the internet expands its global reach, and especially with the possibilities of Starlink, Elon Musk's plan to blanket the Earth with internet service from space, new market opportunities will emerge for IoT devices. Thanks to the available data, we can see exactly how global internet usage has grown over time.

In 2005, the world population was 6.5 billion people with just 16% of them able to connect to the worldwide web, or roughly 1 billion users. This formed a strong divide between the developed world, where 51% of the population had access, and the developing world, where just 8% of the population were counted as internet users. By 2017, the global population had reached 7.4 billion and the number of internet users had increased to 48% of all individuals, or around 3.5 billion users. This included 81% of individuals in the developed world and nearly 42% in the developing world.

In addition, 45% of the world's population now own mobile smart phones, which can act as an interface to control and interact with IoT devices. Mobile devices and the internet have followed similar trends in user growth over the past decade, so it's no surprise that IoT devices are following suit. Between 2014 and 2020, the number of businesses that use IoT devices grew from 13% to 25%, and Gartner says that 5.8 billion IoT devices will be deployed in various applications in 2020.

Everything Will Be on Cloud

IoT devices are frequently equipped with sensors that collect data at the network edge and upload it to centralized databases where it can be processed, transformed, stored, and analyzed. The quantity of data that can be generated by a network of IoT endpoints is staggering. Analysts at International Data Corporation are estimating that by 2025, IoT devices will produce roughly 80 zettabytes of data per year - that's 80 sextillion bytes of data, or  around 80 billion terabytes.

As the current leading paradigm for big data processing and analytics, the cloud will play a major role in the future of IoT devices. Cloud computing essentially means that individuals and organizations can store large quantities of data on internet-accessible servers, instead of on their local devices. This makes it easy for just about anyone to obtain low-cost access to data storage and computational resources that are needed to manage and analyze IoT data, even in real time.

Cloud computing provides the pathways as well as the physical and virtual infrastructure for transporting large amounts of data from billions of endpoints into centralized data storage locations and processing that data. Cloud computing can be used to dynamically allocate computing resources to process data, enabling real-time analytics for IoT data. Cloud computing also helps reduce the costs associated with deploying large numbers of IoT devices.

Image courtesy of Unsplash

Future IoT deployments will continue to rely on robust data centers to dynamically manage the storage and processing of data generated by IoT devices and sensors in the field.

IoT Security will be Emphasized

While the cloud will serve as one of the major enablers of IoT technology through 2020 and beyond, IoT security represents something of a stumbling block that will have to be surmounted as we move into the future.

The issues that surround digital security of IoT devices can be summarized in relatively simple terms:

First, imagine the front door of your house. You want to prevent intruders from coming through the door, so it's probably made out of steel, securely installed and somehow locked. A physical lock requires a key, meaning that a potential thief would have to obtain the key (or a replica) to get in. Some home-owners have replaced the normal lock-and-key system with keyless entry, for example, a number pad where you enter a code to gain access.

Now, with the IoT, home-owners can install a device that allows them to lock or unlock the door remotely using a smart phone. On the surface, this is a great idea: thieves can no longer steal your access code or copy your key. On the other hand, a more sophisticated thief with some technical chops might be able to steal and duplicate the electronic signal that controls the locking mechanism. This would give them unfettered access to lock and unlock the door as desired. This scenario reflects the current reality of the IoT - the capabilities are there, but there is a clear and immediate need to enhance device security and encryption to prevent these types of breaches.

In a sense, many of the problems solved by IoT devices will be replaced with IoT security problems - many of which still need to be solved. Going forward, we expect to see a significant amount of effort invested into IoT security solutions. IoT products for both home and business will need to be effectively secured against data theft and other types of malicious attacks. The future of the Internet of Things in 2020 will depend on how effectively we learn to secure IoT devices.

Developing Countries Will Experience Growth

Research by consultancy firm McKinsey and Co. suggests that by 2020, developing countries would account for 40% of the total value of the IoT device market. To understand why, we need to look beyond web-connected devices like smart cars and refrigerators and think about how IoT devices can be used to interconnect the infrastructure that shapes our world.

Developing countries need infrastructure investments. They need agriculture and education to develop their human resources. They need strong health care systems. They need industry and manufacturing to supply people with goods. They need roads and bridges to enable the transportation of people and goods, and all of these processes and areas of development can be supported by the IoT.

Summary

The year 2020 represents the commencement of an exciting decade for development in the IoT. As internet user-ship continues to grow, we expect the growth of the IoT to accelerate. While there are various estimates for how many devices and how much data will be produced, we know that most of the growth is going to happen in the developing world where the IoT can make a huge impact in many areas of social and economic development. We also expect the cloud to play an enormous role in enabling many of the benefits of the IoT, while IoT security will pose a major challenge over the coming decade.

 

Can I Automate Flashing S-Record Files to a Chip?


Continue reading

Amazon’s Electronics are Catching Fire? Here’s How This Can Be Prevented.


With the electronics industry booming and the demand for cables, cords, and power chargers on the rise, many companies, including Amazon, have been supplying the high demand for such items. Amazon’s own private brand, AmazonBasics, has become a substantial part of their retail business and is continually growing, offering numerous tech accessories including cables, chargers, surge protectors, and more.

A recent article by CNN, “Dozens of Amazon's own products have been reported as dangerous -- melting, exploding or even bursting into flames. Many are still on the market”, describes several hazardous accounts consumers have experienced while using Amazon-supplied tech accessories, many of which include fire related incidents, such as devices exploding, bursting into flames, and melting.

Although many cable manufacturers and suppliers, including Amazon, go through their own vetting processes to help ensure products are safe to use, faulty and hazardous cables are still making their way through.

Consumers Report Exploding, Melting Devices

While Amazon has been under scrutiny for supplying faulty and unsafe cables and other electronic products in the past, Amazon has come into the spotlight once again as numerous consumers have reported unsafe and even hazardous experiences with products including their USB and Apple Lightning cables.

Phone catching fire while charging.

The CNN article opens with an incident involving one Amazon customer, Austin Parra, who experienced a dangerous and life-threatening situation while using a USB cable purchased from the retailer. Parra reported that the cable being used to charge a cell phone short-circuited and "the heat produced by the cord ignited the upholstery for the office chair," all while he was asleep. Parra was later taken to the hospital and was left with second-degree burns. In addition to this case, there are hundreds of other accounts of customers reporting similar incidents.

Many verified customers have left reviews on the products themselves; one in particular shared their own experience with an Apple Lightning cord heating up to the point where it melted and ruined their iPhone, stating "DO NOT BUY! FIRE HAZARD! These should be taken off the market immediately!!!"

Quality Testing Beyond Cable Certification

Consumers reporting faulty cables and companies recalling cables is nothing new. Other cable manufacturers and suppliers have also dealt with the repercussions of supplying flawed cables that ended up in the hands of consumers despite the safety and quality testing measures in place. Amazon in particular has reported that “safety is a top priority,” stating that they vet their manufacturers carefully and test the products for safety and compliance standards before and after they are sold. With all of these measures in place, why are customers still experiencing such issues?

Electrical engineers who have provided their expertise on the matter noted that some of these issues reported by consumers can be attributed to “user error or other external factors,” however, if there are multiple reports on the same product, it is likely due to the design or manufacturing process.

Additionally, the article mentions that USB-IF, the organization responsible for enforcing the standards and specifications of USB devices, believes that cables over-heating is likely not due to user error. It states, “A cable that is substandard, whether because of a design or manufacturing defect, can be dangerous and lead to electric shock, overheating, sparks or fire.”

While USB-IF has certified a number of Amazon’s cables, they noted that certification is not enough to ensure a safe and quality-made cable, and that they focus on “the functionality of the cables and making sure their specifications are in compliance.” USB-IF also expressed that certification is “not a replacement for industry best practices or any applicable local, state or government statutes, rules or regulations pertaining to safety."

Preventing Dangerous Cables from Coming Off the Production Line

Cables that undergo faulty design and manufacturing processes are more at-risk to dangerous consequences, including short circuiting, over-heating, and overdrawing current; all of these can lead to hazardous circumstances. Therefore, electronic manufacturers, especially those fabricating cables that supply high amounts of power, need to be wary of their development processes and ensure products are being made properly during production.

Cable manufacturing is a complex process which should not stop at obtaining certifications from the standards organizations, including USB-IF, Apple MFi, VESA, and HDMI.org. As noted above, even these organizations advocate continuing safety and quality checks throughout the development and manufacturing process, as these processes aren't foolproof and often lead to variability within the product. Many manufacturers are already familiar with this and do have testing measures in place. However, even "functional" testing and statistical process control (SPC) may not be sufficient to prevent faulty cables from making their way to big-box retailers.

As noted in the article, only a small percentage of customers reported fire-related issues while using Amazon's branded cables, but even these cables caused undeniable issues much bigger than the cables themselves. Just one faulty cable can endanger users' wellbeing, create legal liability, and damage the company's brand.

So, what is the most reliable way to ensure each cable being sold to consumers meets safety and quality standards? By performing individual quality control on each cable coming off the production line.

Affordably and Quickly Test Cables using Advanced Cable Tester v2

Individual quality control is a testing method that ensures each product is tested during and after production to verify it meets the necessary safety and quality standards. The Advanced Cable Tester v2 is a cable testing solution designed for this purpose and supports mass testing in lab and factory settings. With just pennies per test and results in under 10 seconds, testers can more easily and affordably gather insight into their cables and whether or not they meet the relevant specification and other safety requirements. The Advanced Cable Tester v2 supports multiple different cable types and comprehensively tests several cable components including Pin Continuity, DC Resistance, E-Marker validity, Signal Integrity, and other Apple Lightning specific tests, if applicable. For any failing cable, the detailed report flags the issues and their sources for easy debugging.

 

Total Phase’s Advanced Cable Tester v2 provides comprehensive testing for USB Type-C to Lightning cables.

The Advanced Cable Tester v2 supports a variety of USB, Apple Lightning, HDMI, and DisplayPort cables.

For a quick demonstration on how the Advanced Cable Tester v2 quickly tests USB, Apple Lightning, HDMI, and DisplayPort cables, watch the video below:

To learn more about this product and the cables supported, please visit the Advanced Cable Tester v2 product page.

Which Aardvark I2C/SPI Host Adapter Pins Can Be Made Available for GPIO Signals and How Are They Controlled?


Question from the Customer:

I am starting to use the Aardvark I2C/SPI Host Adapter and I have some questions about the GPIO pins, including how to program them with your software. I am using the Aardvark adapter with an I2C device.

  • Can you provide guidelines for configuring Aardvark adapter's MISO pin as an output and setting the logic level?
  • For GPIO direction control, I only want to control pins 5 and 7-9, and leave pins 1 and 2 for normal operation. However, my understanding is the aa_gpio_direction function requires an output for pin 1 and pin 2. How can I achieve this or is there another setting to use that does not interfere with normal I2C bus operations?

Response from Technical Support:

Overview of Aardvark Adapter Signal Pins

The Aardvark I2C/SPI Host Adapter is compatible with both 3.3V and 5V signal levels out of the box. The I2C bus is open-drain and the Aardvark adapter contains pull-up resistors for the SCL and SDA lines. These lines are pulled up to 3.3V. We will go over controlling the signals using Aardvark Software API.

Programmable GPIO Pins

The Aardvark GPIO mode has six GPIO signals:

  • Pin 1 - GPIO SCL signal
  • Pin 3 - GPIO SDA signal
  • Pin 5 - GPIO MISO signal
  • Pin 7 - GPIO SCK signal
  • Pin 8 - GPIO MOSI signal
  • Pin 9 - GPIO SS signal

Which pins you can program is related to which mode the Aardvark adapter is enabled in:

  • When the Aardvark adapter is enabled in I2C mode, the four SPI pins (pins 5, 7, 8, 9) are used as GPIO pins. These are the pins that you can program with your setup.
  • When the Aardvark adapter is enabled in SPI mode, the two I2C pins (pin 1 and 3) are used as GPIO pins.
  • In addition to enabling the Aardvark adapter for I2C or SPI modes, there is also a GPIO only mode. In this setting, both SPI and I2C pins are available as GPIO, for a total of six pins.

For example, enabling the Aardvark adapter in I2C mode via aa_configure(handle, AA_CONFIG_GPIO_I2C) allows SPI pins 5, 7, 8 and 9 to be controlled with API calls. The I2C signal pins cannot be used as GPIO pins.

Aardvark Adapter Signals per Mode

I2C Mode Pins:

  • SCL (Pin 1): Serial Clock line
  • SDA (Pin 3): Serial Data line

SPI Mode Pins:                                                                    

  • SCLK (Pin 7): Serial Clock
  • MOSI (Pin 8): Master Out Slave In
  • MISO (Pin 5): Master In Slave Out
  • SS (Pin 9): Slave Select

GPIO Mode Pins:

  • In GPIO mode, all six of the above I2C and SPI pins are available as GPIO pins.

Setting I/O Logic Levels

GPIO pins can be configured for input signals or output signals. For input signals, the internal pull-ups can be enabled or disabled.

  • When a GPIO bit is configured as an input, the signal pull-up can be enabled or disabled.
  • If a GPIO bit is configured as an output and the value 1 is written to that bit, then the signal will be logic 1.
  • If a GPIO bit is configured as an output and the value 0 is written to that bit, then the signal will be logic 0.
  • If a GPIO bit is configured as an input and logic 1 is supplied to the signal, then the bit reads logic 1.
  • If a GPIO bit is configured as an input and logic 0 is supplied to the signal, then the bit reads logic 0.

Configuring GPIO Pins using API

When using the call aa_configure(handle, AA_CONFIG_GPIO_I2C), the Aardvark adapter is set to the GPIO+I2C configuration, in which the SPI pins (pins 5, 7, 8, 9) are used as GPIO pins (GPIO# 02, 03, 04, 05).

The table below shows the corresponding GPIO number, pin number, and value. Please note, all six pins that could be used as GPIO are listed. The pins that apply for your I2C configuration are 5, 7, 8, and 9.

GPIO Pinout and Mask Values

The values are assigned as bitmasks of the six GPIO pins, from GPIO00 to GPIO05.

Programming Pin I/O with Bitmasks

Setting Direction:

print aa_gpio_direction(handle,0x0C)

For setting direction, when a line's bit is 0, the line is configured as an input; otherwise, it is an output. The value passed is 0x0C. In this mask, the bits for GPIO00 and GPIO01 are 0 (input), but this has no effect because those lines are being used as I2C lines, not GPIO lines, in this configuration. The other available GPIOs (GPIO04 and GPIO05) are inputs, and GPIO02 and GPIO03 are configured as output lines.

Setting Logic:

aa_gpio_set(handle,0x0C)

The 0x0C value is a bitmask that specifies which outputs are set to logic high and which outputs are set to logic low. With the value 0x0C, GPIO# 02 and GPIO# 03 are set to logic high.
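Putting these calls together, here is a minimal sketch using the Aardvark Software API (aardvark_py); the port number, pull-up mask, and output values are assumptions chosen for illustration, based on the mask table above.

# A minimal sketch that configures the Aardvark adapter for I2C + GPIO and
# drives the SPI pins as GPIO, based on the direction and set calls above.
# Port 0 and the chosen mask values are assumptions for illustration.
from aardvark_py import *

handle = aa_open(0)                         # open the first Aardvark adapter
aa_configure(handle, AA_CONFIG_GPIO_I2C)    # I2C active; SPI pins become GPIO

# GPIO02 (MISO, pin 5) and GPIO03 (SCK, pin 7) as outputs;
# GPIO04 (MOSI, pin 8) and GPIO05 (SS, pin 9) remain inputs.
aa_gpio_direction(handle, 0x0C)

# Enable pull-ups on the input pins (GPIO04 and GPIO05 -> mask 0x30).
aa_gpio_pullup(handle, 0x30)

# Drive both outputs high (0x0C), then low (0x00).
aa_gpio_set(handle, 0x0C)
aa_gpio_set(handle, 0x00)

# Read the current state of the input pins.
inputs = aa_gpio_get(handle)
print("GPIO inputs: 0x%02X" % inputs)

aa_close(handle)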

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you need more information, feel free to contact us with your questions, or request a demo that applies to your application.

Request a Demo

Control Center Software Series: General Purpose IO


The Total Phase Control Center Serial Software provides access to the I2C and SPI functionalities of the Aardvark I2C/SPI Host Adapter, Cheetah SPI Host Adapter, and Promira Serial Platform, including reading and writing messages, XML batch scripting, and more. Additionally, this software allows users to access and interact with the GPIO functionality that is offered with the Aardvark adapter and Promira platform specifically. The GPIO functionality can be combined with I2C or SPI or it can be used standalone, which offers users flexibility when developing and testing devices, as GPIOs bring additional uses for pins not otherwise natively available. In this article, we'll provide more insight into the GPIO functionality that is offered within the Control Center Serial Software.

What is a GPIO?

For users new to General Purpose Input/Output, or GPIO, it is a digital signal pin located on an integrated circuit that can be configured as an input or output depending on the application. Unlike other pins that have a dedicated purpose, the GPIO pins can be customized by hardware and software developers for use with a variety of different purposes. Developers can use GPIO pins to connect microcontrollers to other devices such as sensors, LEDs, or system-on-chip circuits.

GPIO in the Control Center Serial Software

Within the Control Center Serial Software, there are six operational modes supported: I2C + SPI, I2C + GPIO, SPI + GPIO, GPIO Only, Batch Mode, and Multi SPI I/O. Depending on the device and the selected mode, different options will appear in the main display. In this article, we'll focus on using the GPIO-related modes.

The “GPIO Only” mode allows users to take advantage of the six GPIO pins available on the 10-pin header of the Aardvark adapter and Promira platform.  Those looking to combine I2C or SPI with GPIO concurrently can choose the “I2C + GPIO” or “SPI + GPIO” mode options.

When GPIO is combined with either I2C or SPI, only the unused pins are available for GPIO. For example, when using I2C + GPIO, only the SPI pins (5,7,8,9) are available for GPIO and when using SPI + GPIO, only the I2C pins (1,3) are available for GPIO.

When using the Promira Serial Platform, up to eight GPIO pins are available in the software. The Promira platform can be licensed for up to 16 GPIOs when using the 34-pin connector.  The remaining available GPIO pins can be utilized through the Promira API.

The figures below provide examples into the GPIO-related modes while using an Aardvark I2C/SPI Host Adapter. Pin assignments for using the Aardvark adapter pins for GPIO include:

  • GPIO SCL - signal pin 1
  • GPIO SDA - signal pin 3
  • GPIO MISO - signal pin 5
  • GPIO SCK - signal pin 7
  • GPIO MOSI - signal pin 8
  • GPIO SS - signal pin 9
GPIO Only Mode for Aardvark I2C/SPI Host Adapter

 

GPIO mode when using SPI + GPIO for Aardvark I2C/SPI Host Adapter

 

GPIO mode when using I2C + GPIO for Aardvark I2C/SPI Host Adapter

GPIO Parameters

When GPIO mode is selected, only the available pins are displayed in the window. The parameters of each pin can be set by the user.

Parameters of the mode include:

  • GPIO#: Number of each GPIO signal
  • Pin#: Pin position in the 10/34-pin socket connector for the Promira platform or in the 10-pin connector for the Aardvark adapter
  • Value: The bit value of each pin signal

The pins have the following values:

Signal Aardvark Promira
SCL 0x01 0x01
SDA 0x02 0x02
MISO 0x04 N/A
SCK 0x08 N/A
MOSI 0x10 N/A
SS0 0x20 0x04
SS2 N/A 0x08
SS1 N/A 0x10
SS3 N/A 0x20
SS4 N/A 0x40
SS5 N/A 0x80
  • Direction: Whether the GPIO pin is configured as an input or an output
  • Pull-Ups: Whether a pin's pull-up is active or inactive
  • Out Set/Out Value: "Out Set" sets the levels of the output pins, where "0" and "1" are accepted. "Out Value" indicates the last known values of the output pins.
  • In Value: Indicates the last known values of the input pins

Transaction Log and Batch Mode

By using the Transaction Log feature in Control Center Serial Software, users can review all I2C, SPI, and GPIO transactions. GPIO transactions are categorized under “Mod.” and are displayed in gray.

Transaction Log in Control Center Serial Software

Users can also utilize the Batch Mode feature to easily configure GPIO functionality within a system. GPIO commands for XML batch scripting include configuring the GPIO interface, getting the value of current GPIO inputs, and setting the value of current GPIO outputs. More on the GPIO commands can be found here.

Host Adapters that Support GPIO Functionality

Promira Serial Platform

The Promira Serial Platform is a versatile and powerful host adapter that supports I2C and SPI protocols. Through its field upgradeable design, users can select from different application levels, which provide access to varying speeds, the number of supported GPIO pins, and more. Depending on the application and level, this tool can support from six to sixteen GPIO pins that also include pins normally used for I2C to send and receive signals. The specific pin configuration can be found in the Promira I2C/SPI Active User Manual. The GPIO functionality can be combined with either I2C or SPI or can be used by itself.

The number of supported GPIOs per application include:

Promira Serial Platform

Aardvark I2C/SPI Host Adapter

The Aardvark I2C/SPI Host Adapter is a general-purpose host adapter that supports I2C and SPI communication. This tool offers GPIO functionality and allows users to use the six pins that are normally used for I2C and SPI to send and receive signals. These six pins are SCL, SDA, MOSI, SCLK, MISO, and SS. The GPIO functionality can be combined with either I2C or SPI or can be used by itself.

 

Aardvark I2C/SPI Host Adapter

For further details on the GPIO functionality within Control Center Serial Software, please visit the Control Center Serial Software user manual or contact us at sales@totalphase.com.
