
How Do I Identify the CAN Interface When I Have Four Komodo Can Duo Interfaces Connected to the Hub?


Question from the Customer:

I am using a Windows computer to communicate to four Komodo CAN Duo Interfaces through a USB hub. I am running eight separate targets. How do I know which Komodo interface is handling which target? My problem – the Windows device manager associates a different port number for each Komodo channel each time I plug in a new device. How can I identify which Komodo channel is transferring data?

Response from Technical Support:
Thanks for your question! The most recently attached Komodo CAN Duo Interface is allocated USB port 0, and the previously attached device is shifted to the next port. For example:

  1. Komodo interface A is connected to the host machine and is allocated port number 0.
  2. Komodo interface B is connected to the host machine next and is allocated port number 0; Komodo interface A is then shifted to port 1.

Get the Unique ID of the Komodo Interface

To have a consistent handle for each Komodo device, we recommend using the Komodo Software API. Our API is compatible with Windows, Linux, and macOS operating systems, and supports multiple programming languages. In addition, functional program examples are included, which you can use as-is or modify for your specific requirements. There are two API commands for locating a device: km_find_devices and km_find_devices_ext.

Use API to Find Assigned Ports

For your requirements, to determine the ports to which the Komodo interfaces have been assigned, we recommend using the command km_find_devices_ext: it returns both the unique ID of each Komodo interface and its allocated port number. With that information, you can modify your code to select the handle of a specific target based on the unique ID of its Komodo interface. The other command, km_find_devices, only identifies the allocated port numbers. Here are the details of the km_find_devices_ext command; a short code sketch follows.

Find Devices (km_find_devices_ext)
  int km_find_devices_ext (int num_ports,
                           u16 *ports,
                           int num_ids,
                           u32 *unique_ids);

Function

Get a list of ports and corresponding unique IDs, through which Komodo devices can be accessed.

Arguments

num_ports       maximum number of ports to return
ports           array into which the port numbers are returned
num_ids         maximum number of unique IDs to return
unique_ids      array into which the unique IDs are returned

Return Value

This function returns the number of ports found, regardless of the array size.

Specific Error Codes

None.
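Putting km_find_devices_ext to work, here is a minimal C sketch; it assumes the komodo.h header and u16/u32 types from the Komodo API package, and TARGET_A_ID is a hypothetical constant standing in for the unique ID of one of your interfaces:

  #include <stdio.h>
  #include "komodo.h"               /* Komodo API header from the API package */

  #define MAX_DEVICES  16
  #define TARGET_A_ID  0x00001234u  /* hypothetical unique ID of one target */

  int main (void) {
      u16 ports[MAX_DEVICES];
      u32 unique_ids[MAX_DEVICES];

      /* Returns the number of devices found, regardless of array size */
      int count = km_find_devices_ext(MAX_DEVICES, ports,
                                      MAX_DEVICES, unique_ids);
      if (count > MAX_DEVICES)
          count = MAX_DEVICES;  /* only the first MAX_DEVICES entries are filled */

      for (int i = 0; i < count; i++) {
          printf("port %d -> unique ID %u\n", ports[i], unique_ids[i]);

          /* Select the device by its fixed unique ID rather than by the
             port number, which Windows may reassign on every re-plug */
          if (unique_ids[i] == TARGET_A_ID) {
              Komodo km = km_open(ports[i]);
              /* ... acquire the CAN channel and transfer data ... */
              km_close(km);
          }
      }
      return 0;
  }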

For more information, please refer to the API Documentation section of the Komodo CAN interface User Manual.

Additional resources that you may find helpful include the following articles about using the API with the Komodo Interface:

We hope this answers your question. If you have other questions about our CAN interfaces or other Total Phase products, feel free to contact us at sales@totalphase.com. You can also request a demo that is specific for your application.

Request a Demo


Which USB Cable Tester Fulfills Production Test Requirements?


Question from the Customer:

We have been using the Advanced Cable Tester - Level 1 Application with the Promira Serial Platform, and it’s been successful for debug and development. However, we are now setting up a production test system for USB Type-A to Type-C cables. In addition to needing to connect to both ends of this cable, because this system will be on the production floor, we need the device to handle up to 10,000 insertions. Looking at specifications, the cable tester we have is only specified for up to 5,000 insertions.

Is there a way we can use this cable tester on the production floor? Is an adapter available to fulfill those two requirements?

Response from Technical Support:

Thanks for your questions! The Advanced Cable Tester is a great tool for debugging cable issues in design and development. However, to fulfill the requirements you mentioned, including 10,000 insertions for production usage, we recommend the Advanced Cable Tester v2.

Total Phase’s Advanced Cable Tester v2 provides comprehensive testing for USB Type-C to Lightning cables.

The Advantages of Using the Advanced Cable Tester v2 on the Production Floor

The Advanced Cable Tester v2 is durable and cost effective. It is designed specifically for a manufacturing environment and built to easily run tests with accurate results for a wide range of cables.

Ease of Use on the Production Floor

The Advanced Cable Tester v2 has an embedded LCD screen and an audible alarm that indicate pass/fail results. This simple pass/fail indicator makes it easy for anyone on the test floor to quickly and efficiently test cables. It also means the units can be operated without a computer nearby and without needing to interpret a test report.

Additionally, the Advanced Cable Tester v2 can store up to 1,000,000 test reports locally on the device. The test reports are logged automatically so there’s no need to worry about losing test data if the device is powered off.

A new feature on the Advanced Cable Tester v2 includes the ability to add custom tags to a test profile. This allows for tracking of a lot or batch number and other relevant test details. There’s also a feature for scanning serial number bar codes that allow for cable traceability back to the test report.

Supported Cable Testing

With its interchangeable modules, the Advanced Cable Tester v2 supports testing a large variety of cables including USB Standard-A to Type-C. Each module is rated for 10,000 tests. Here is a list of cable types that are supported today.

USB Cables

  • USB Type-C to USB 3.1 Standard-A
  • USB 3.1 Standard-A to USB 3.1 Micro-B
  • USB 3.1 Standard-A to USB 3.1 Standard-B
  • USB Type-C to USB Type-C
  • USB Type-C to USB 3.1 Micro-B
  • USB Type-C to USB 3.1 Standard-B

Lightning Cables

  • USB 3.1 Standard-A to Lightning USB2
  • USB Type-C to Lightning USB2

Video Cables

  • HDMI-A to HDMI-A
  • DisplayPort to DisplayPort

Because the cable interface is interchangeable, the system can be kept up to date as new modules are released to support new cable designs, as long as the firmware is kept up to date as well.

The removable connector modules also make for a lower cost per test. At volume, testing costs as little as $0.02 per test, making the Advanced Cable Tester v2 better suited to a high-volume production test environment than its predecessor.

Advanced Uses in the Lab

The Advanced Cable Tester v2 does more than display pass/fail results. The comprehensive set of tests includes pin continuity, DC resistance and resistor measurements, E-Marker verification, and signal integrity. These tests uncover errors and discrepancies that deviate from the cable specification and help determine the source of failure.

For details, please see Cost Savings and Benefits of Cable Testing with the Advanced Cable Tester v2.

Additional resources that you may find helpful include the following:

We hope this answers your question. Need more information? You can contact us and request a demo that applies to your application, as well as ask questions about other Total Phase products.

Request a Demo

What is DisplayPort Certification and Why is Individual Quality Control Just as Important?


DisplayPort (DP) is a digital interface that transmits audio and video signals. DisplayPort was created to replace DVI and VGA with a higher-performing standard that could transfer high-quality video and audio signals, as well as offer features that allow for better interoperability between advancing technologies.

Video Electronics Standards Association, or VESA, established and continues to oversee the development of the DisplayPort interface, now one of the most widely adopted video display standards in the electronics industry alongside HDMI. VESA has introduced multiple DP specifications over the years, with each new iteration offering higher bandwidth, faster speeds, and new capabilities. For more information on the latest DisplayPort 2.0 spec, check out our blog, “DisplayPort 2.0 is the Latest DisplayPort Spec – How does it Compare to DisplayPort 1.4?”.

To ensure that cable manufacturers are producing up-to-standard DisplayPort cables and other devices, VESA offers a compliance program. The DisplayPort compliance program involves a rigorous set of tests that can typically be performed at an authorized testing center (ATC) or through self-testing methods.

How do Products Obtain DisplayPort Certification?

In order for devices to become DisplayPort certified, manufacturers must submit their product, whether it is a source, sink, media device, or cable or adapter, to an authorized testing center, where it must pass various tests; these tests include a physical layer test, link layer test, interoperability test, EDID test, Multi-Stream Transport (MST) test, HDCP test, and an HDCP 2.2 test if supported.

Authorized Test Centers will typically test the product using a variety of tools, and once a product has been determined to pass these qualifying tests, the manufacturer is permitted to use the compliance logo.

DisplayPort Certified Logo

Is Product Certification Enough?

Design certification is necessary to determine conformance to DP specifications, but what about post-certification production line testing to measure quality outputs?

Compliance tests can catch design errors, but they cannot always prevent inconsistencies from occurring on the production line, which is a different ball game altogether. Cables produced at mass scale will not always adhere perfectly to the prototyped cable that was tested for compliance. Without a cable testing solution to perform individual quality control on each and every cable coming off the production line, even a certified cable could be defective when it lands in the hands of consumers.

Manufacturers of DisplayPort cables, certified or not, should be verifying cables are up to spec during and after production because variability and human error are factors that influence the finished product. Without any intervention, producing untested cables can lead to bad brand recognition, expensive recalls, and can even be dangerous if cables are used for medical or military purposes.

While many cable manufacturers do perform some type of testing on their products, it is not always sufficient and does not always cover all bases. Statistical Process Control (SPC) is a quality control method that tests random batches of cables to estimate the passing rate of the full production run. This method is often lower in cost, but it is not as accurate as 100% individual quality control and can allow unreliable cables to pass inspection.

Video cable manufacturers also often perform functional testing to determine whether the cable can simply transmit video and audio, but testers cannot easily determine conformance to the cable specification with the naked eye. There are numerous components within the cable that are required to meet certain measurements, including whether pins are properly aligned and whether or not the cable's signal meets the acceptable bit rate.

Individual Quality Control is Vital to Cable Production Operations

How can cable manufacturers go beyond cable certification and leave behind insufficient testing methods? With Total Phase’s Advanced Cable Tester v2, affordable, complete quality control is attainable.

The Advanced Cable Tester v2 comprehensively tests cables.

Interchangeable connector module tests DisplayPort to DisplayPort cables.

This tester supports video cables including HDMI (specifications 2.1 and below) and DisplayPort (specifications 2.0 and below), as well as other cable types including USB and Apple Lightning. It performs a complete assessment of each cable, testing for conformance to specification. The Advanced Cable Tester v2 tests DisplayPort cables for pin continuity, checking for shorts/opens/routings of all pins, including lane pins and wires that transmit video and audio data, DP hotplug pins and wires, and DP power pins and wires.

An example output of our Pin Continuity test for a passing DisplayPort 1.4 cable is shown below.

Pin Continuity Output of DisplayPort to DisplayPort cable

This tester also measures the DC resistance on all non-high-speed wires, as well as tests the cable's signal quality. Our signal integrity test measures the quality of the signal from one end of the cable to the other, and displays eye diagrams with masks per cable specification to help visualize these measurements. If any portion of the eye diagram touches the mask, the cable fails this test.

The captures below represent a passing and failing eye-diagram taken from the same DisplayPort 1.4 cable at 8 Gbps per channel. The left channel passes at this bitrate, while the right channel fails. This could affect the quality of signal and lead to corruption of the video or audio data.

Passing signal integrity test for a DisplayPort to DisplayPort 1.4 cable.

Failing signal integrity test for a DisplayPort to DisplayPort 1.4 cable.

The Advanced Cable Tester v2 is specially designed for factory settings, offering a rugged design, ethernet connectivity, and local storage. In only a matter of seconds, factory personnel at any level can determine a pass or fail result, for only pennies per test. Read more on the cost-savings benefits of using the Advanced Cable Tester v2 here.

To learn more about how the Advanced Cable Tester v2 can test your DisplayPort cables, please contact us at 1-408-850-6501 or email us at sales@totalphase.com.

Understanding the Internet of Things (IoT) and Big Data


The information age is characterized by rapid technological change and digital transformation across industry verticals. During the 1990s, the World Wide Web became publicly available for the first time and saw widespread adoption and innovation with technology companies leading the way as the world's largest businesses created their earliest websites. The following decade beginning in the year 2000 was characterized by the growing proliferation of mobile phones, along with mobile apps, mobile marketing, and mobile commerce - businesses needed to re-think their marketing and communications strategy to reach customers on the new mobile platform. In the 2010s, innovations like cloud computing and the spread of AI technology allowed companies to access cost-effective computing power and data storage on demand, making it easier and cheaper to develop new software applications.

As we round the corner into another decade, you might find yourself asking "What's next? What will be the next technological revolution and how can I prepare for it?" If you follow the latest trends in digital tech, you'll conclude (as we have) that the next decade of development in connected technologies will be centered around the Internet of Things (IoT) and Big Data.

In this blog post, we're breaking down the massive industry trends towards IoT and Big Data. These technologies are not always discussed together, but as we'll show, they go hand-in-hand for organizations who are trying to reduce costs and enhance organizational efficiency through digital transformation.

Foundations of IoT and Big Data

Before we dig into the relationship between IoT and Big Data, let's understand each of these trends separately and the growing role they play in how companies across industry verticals are organizing their operations.

What is the Internet of Things (IoT)?

The IoT is the next logical step in the evolution of internet technology. 

In the 1960s, the first internet project known as ARPANET used packet switching to facilitate communications between multiple computers on a single network. In the 1970s, the TCP/IP protocol was created, setting the standard for how communications could be sent and received between networks. The internet did not exist as the World Wide Web until the early 1990s, when a computer scientist known today as Sir Tim Berners-Lee created formatting standards (HTML) and access standards (HTTP) that anyone could use to build their own website and make it publicly available.

Since the early 1990s, the basic structure of the internet as a network-of-networks has remained the same. What has changed the most about the internet is the devices that use it. At the beginning, these were just desktop computers. Later, wireless technology meant that more people accessed the internet using laptops and mobile phones, and later on tablets and other handheld devices.

This brings us to the IoT, a system of mechanical and digital machines, objects, and devices that contain embedded computers with two special characteristics: they have their own unique identifier and they can send and receive data over a network without direct interference from a human. The IoT creates a new paradigm where any device can act as a network endpoint that is capable of receiving operational instructions or transmitting sensor data about its environment through a network.

Internet of things big data

Organizations are adopting the IoT and deploying IoT-enabled devices at blinding speed. The total number of installed connected/IoT devices grew from 15.41 billion in 2015 to 26.66 billion in 2019 - that's three connected devices for every single person on Earth. The total number of IoT devices in deployment is expected to reach 75.44 billion by 2025.

What is Big Data?

The concept of big data was first introduced in the early 2000s by industry analyst Doug Laney who recognized that humans were generating and storing data at an increasing rate, and that this data might have some practical benefits if appropriately leveraged. The difference between "regular data" and "big data" can be described in terms of the three V's: volume, velocity, and variety.

Volume. Individuals and organizations living in the information age produce large amounts of data, but as that overall volume has increased, so too has the accessibility of data storage and computing resources to store and process it. A single connected/IoT device can generate a huge volume of data on its own and a single organization might have hundreds or thousands of these devices to track and monitor at a given time.

Velocity. Velocity is a measure of how quickly data is generated and processed. In the past, data was collected slowly and processed slowly. Today's IoT devices generate data on a nearly constant basis, enabling the real-time availability of big data that reflects current operational trends. The velocity of big data is an important component of its overall value - frequent data generation and rapid processing means that organizations can react more quickly to circumstances identified through the use of data.

Variety. Variety is all about the types of data that are being tracked and transmitted. In the context of IoT-enabled devices, there are temperature sensors, humidity sensors, pressure and proximity sensors, accelerometers, gyroscopes and other sensory instrumentation that can measure the environment and pass data back through the network to a central processing hub. 

In 2016, humans were already generating data at a rate of 44 billion gigabytes per day. By 2025, global data generation is expected to reach 463 billion gigabytes per day - a ten-fold increase in just about a decade.

The Connection Between IoT and Big Data

By now, you should be convinced that the IoT and Big Data will be the most important technologies of the next decade - now let's explore how they're related.

How Does Big Data Impact IoT?

The relationship between IoT and Big Data processing will continue to grow over the next decade, as people and organizations generate increasingly large data sets from a growing network of IoT devices. Devices connected to the IoT will act as a major data source for big data processing, enabling new technologies, products and services, and giving organizations unprecedented insight and visibility into their processes and operations.

As organizations generate and capture increasingly large data sets from the IoT, the next challenge will be to process that data efficiently, extract insights from the data and present those insights to a human operator in a readable format. Data aggregation tools that collect and standardize data from the IoT will be coupled with artificial intelligence and machine learning algorithms designed to discover patterns and trends in the data through predictive analytics. The merger of artificial intelligence and IoT technologies along with big data will deliver major benefits to organizations capable of supporting the necessary IT infrastructure.

Real-Time Data

Real-time data insights and analytics could be described as the "end goal" when combining IoT, Big Data, and machine learning technologies. IoT devices act as a data source, capturing data from sensors and feeding it through a network to a centralized processing hub. 

Big data analytics software tools will then comb through the data in real time, using the latest computing technology to parse through millions of data points faster than a human analyst ever could. Machine learning and artificial intelligence algorithms will be used to discover trends in the data, generate alerts, and communicate insights to a human operator through a user interface.

How Does IoT and Big Data Impact Companies and the Future?

Businesses today already understand the competitive advantage that comes with access to real-time operational, security, and business analytics. Over the next five years, IoT devices will increasingly provide the inputs for real-time big data analysis, giving organizations real-time feedback on the condition and performance of their assets. This technology will have implications in health care, manufacturing automation, transportation, and marketing, helping companies reduce their costs, eliminate waste, and operate more efficiently in an increasingly connected world. 

Total Phase: Tools for IoT Developers

Are you ready to build the future?

For embedded systems developers working on IoT devices, Total Phase offers a range of hardware and software products to help you create, program, test, and debug IoT devices. From host adapters to protocol analyzers including APIs, we've got everything you need to start developing your own commercial or proprietary IoT devices.

Ready to learn more? Contact our sales team for more information on how we can help you take advantage of the IoT and Big Data.

What is the Difference between Master Register Read and Master Read, and How Can I Best Implement those Commands?


Question from the Customer:

I am using the Control Center Serial Software with an Aardvark I2C/SPI Host Adapter as the I2C master. Can you help me understand the difference between Master Read and Master Register Read? The Master Read command appears similar to the Sequential Read command, an I2C read with a stop – what am I missing?

Response from Technical Support:

Thanks for your question! We’ll start with an overview about these two commands and then go into the details.

Control Center Serial Software Read Commands

Here is a summary of the differences of the two Read commands:

  • The Master Read command performs an I2C bus read operation.
  • The Master Register Read command performs an I2C bus write operation without stop, and then performs the I2C bus read operation.

Master Read Command

The Master Read command simply reads the data sent by the slave on the bus. The value provided in the "Number of Data Bytes" field is the maximum number of bytes the master will accept in a single transaction. The master may receive fewer bytes than are specified in this field, but not more. If the slave does not have the requested number of bytes available, the remainder of the bytes will default to 0xFF; this is due to the pull-up resistors on the bus.

Master Register Read Command

The Master Register Read command follows the typical protocol to read a register on an I2C device in one operation: perform an I2C write with the register address, which is followed by a repeated start and an I2C read.

We recommend checking the datasheet of the I2C slave device to see whether it follows this protocol; how you use Master Register Read depends on the implementation specific to your device's datasheet. The following information is necessary when using the Master Register Read command; a short code sketch follows the list.

  • Register Address, which is different from the I2C slave address, can be entered in either decimal or hexadecimal notation. If using hexadecimal notation, preface the number with "0x".
  • Address Width specifies the size of the register address in bytes. If the provided Register Address exceeds this width, the least significant bytes of the Register Address will be used.
  • Number of Data Bytes specifies the number of bytes the adapter will attempt to read from the I2C slave. The adapter may receive fewer bytes than are specified in this field, but not more. Similar to the Master Read command, if the slave device does not have the requested number of bytes available, the remainder of the bytes will default to 0xFF due to the pull-up resistors on the bus.
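As a minimal C sketch of the same sequence through the Aardvark API (a write of the register address without a stop, followed by a read; the 0x50 slave address and 0x10 register address are hypothetical examples for illustration):

  #include <stdio.h>
  #include "aardvark.h"           /* Aardvark API header from the API package */

  int main (void) {
      Aardvark handle = aa_open(0);            /* first available Aardvark port */
      if (handle <= 0) {
          printf("Unable to open Aardvark adapter (%d)\n", handle);
          return 1;
      }
      aa_configure(handle, AA_CONFIG_SPI_I2C); /* enable the I2C subsystem */

      u16 slave_addr = 0x50;       /* hypothetical 7-bit slave address */
      u08 reg_addr   = 0x10;       /* hypothetical register address */
      u08 data[4];

      /* Master Register Read: write the register address without a stop ... */
      aa_i2c_write(handle, slave_addr, AA_I2C_NO_STOP, 1, &reg_addr);

      /* ... then issue the read; unavailable bytes read back as 0xFF
         because of the bus pull-up resistors */
      int count = aa_i2c_read(handle, slave_addr, AA_I2C_NO_FLAGS,
                              sizeof(data), data);

      for (int i = 0; i < count; i++)
          printf("data[%d] = 0x%02X\n", i, data[i]);

      aa_close(handle);
      return 0;
  }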

You can easily set up Master Read and Master Register Read using Control Center Serial Software. Here is an example of using the Master dialog:

Control Center Serial Software Dialog for Master Read and Master Register Read

Examples of Executing Read Commands

This article gives an example of executing the Master Register Read command and using batch commands to implement a register read:

Using the Aardvark I2C/SPI Host Adapter, How Do I Perform a "Master Register Read" in Batch Mode?

For more information about batch commands, please refer to the Batch Mode section of the Control Center Serial Software User Manual.

If you need more control for your setup, you can use the Aardvark Software API. This article shows an example of using the API to implement a register read:

How Do I Implement I2C Master Register Read with the Aardvark I2C/SPI Host Adapter?

For more information, please refer to the API Documentation section of the Aardvark I2C/SPI Host Adapter User Manual.

Additional resources that you may find helpful include the following:

We hope this answers your question. Want more information? You can contact us and request a demo that applies to your application, as well as ask about other Total Phase products.

Request a Demo

A Simple Guide to IoT Architecture


The Internet of Things (IoT) is the latest step in the evolution of connected information technology systems. A report published by Gartner predicts that the number of IoT connected devices deployed across the world will reach 20.4 billion by the year 2020 - that's nearly three times the total population of Earth. To put that into perspective, there were just 3.8 billion IoT devices deployed globally in 2015, and just 7 billion in 2018. As a growing number of consumers and corporations adopt IoT technology and solutions, the total number of internet-connected IoT devices is growing exponentially.

The IoT is a set of technologies that is both complex and elegant in its execution. We have the ability to build sensors that measure almost anything in the environment. If we connect those sensors to a smart device with Wi-Fi and a microprocessor, we can collect readings from those sensors and send them to users or operators over the internet. With the right tools, we can turn that data into actionable information that a person (or machine) could use to make a decision, assess the health of a system, or anticipate a problem.

Mobile remote control is where the "magic" of the IoT really happens. Instead of checking a sensor for a reading, readings are delivered to the user automatically. Sensors can also be connected to actuators, devices that allow things to do an action (switching a light or fan on or off, changing the speed of a motor, lowering the temperature in an environment, triggering an alarm, locking a door, etc.). Users can exercise remote control of these functions through smart devices that are connected to the IoT. 

IoT architecture refers to the set of underlying systems that are used to deliver services using the Internet of Things. Smart homes, automated warehouses, digital factories, and connected hospitals all rely on the same basic underlying infrastructure to deliver their critical IoT capabilities. If you are developing a product that will deliver services using the IoT, you can use this article to learn about IoT network architecture concepts and understand the basic components of IoT architecture.

iot architecture

 

The Three Layers of IoT Architecture 

IoT architecture can be described as a technology stack with three layers: the IoT device layer, the IoT gateway layer, and the IoT platform layer. Data originates in the device layer and passes through the gateway layer before entering the cloud where the IoT platform layer resides. Each layer plays an important role in the delivery of IoT services.

IoT Device Layer

The IoT device layer consists of all the smart devices that are connected to the system. Smart devices are products or assets that are embedded with sensors, processors, actuators, and the capability of transmitting data over the internet. They can collect data from their environment and share it with operators, users, other smart devices, and applications connected to the system. 

Smart devices can use many different types of sensors to collect data from their environment. An IoT device used in agriculture might include soil moisture sensors that measure the water content of the soil, humidity sensors that measure air moisture content, and temperature sensors that measure the atmospheric temperature. A smart home installation might include smoke sensors for detecting fire, touch and motion sensors for security, and a light sensor for automating lighting in the home. 

The simplest IoT implementations collect data from just one device, such as a lone home security camera. Other implementations may incorporate hundreds or even thousands of devices, requiring a more robust back-end infrastructure to manage the data volume and operations.

IoT Gateway Layer

The IoT gateway layer sits between the IoT device layer and the IoT platform layer. This layer consists of a physical device or software program that collects data from smart devices and transmits it to the cloud. The gateway layer offers two practical benefits to the IoT architecture: load management via data pre-processing and security.

Some smart devices are equipped with sensors that generate thousands or tens of thousands of data points every second. Consider a set of 12 networked high-definition security cameras, each one recording surveillance footage in 4K. If all of this data were directly uploaded to the cloud, there would be issues with bandwidth, response times, and network transmission costs, and the whole thing might be cost prohibitive. The gateway layer can consist of a dedicated software program that pre-processes data before sending it on to the cloud.

Gateway devices can also play a role in securing data transmission from smart devices. Features like tamper detection, encryption, and hardware random number generators can be implemented to prevent malicious attacks against IoT devices and secure data that is moving to the cloud.

IoT Platform Layer

Once data from the IoT is uploaded to the cloud, it can be processed by tools and applications in the IoT platform layer. The platform layer consists of Edge IT and cloud-based or physical data centers that play a role in data analytics, management, and archiving. Applications that provide data transforming, analytics, monitoring, and other functions and services exist in the IoT platform layer. The IoT platform layer also includes tools that visualize processed sensor data on user-facing devices. 

4 Stages of IoT Architecture 

The four-stage IoT architecture model offers a general framework for implementing a network of smart devices that collect data from the environment, pass that data to the cloud through internet gateways, use edge IT for basic analytics and pre-processing, and ultimately store data in data centers or in the cloud. 

Sensors/Actuators

Smart devices interact with their environment using sensors and actuators. Sensors capture data from the environment and relay it to data centers and the cloud via internet gateways and edge IT. Actuators are a kind of motor that can control or move a mechanical system. Actuators can be activated by operators using commands that originate in the cloud and are passed to smart devices through Internet gateways and edge IT. 

There are many types of sensors, including:

  • Accelerometers 
  • Color Sensors
  • Flow Sensors
  • Level Sensors
  • Light Sensors
  • GPS Sensors
  • Humidity Sensors
  • Proximity Sensors
  • Rain Sensors
  • Soil Moisture Sensors
  • Temperature Sensors
  • Tilt Sensors

Actuators in smart devices use low-power, high-efficiency electric motors. They may be plugged in or powered by battery.

Internet Gateways

The data captured by sensors begins its life cycle in analog form. Data from sensors must be aggregated, digitized, and transformed into a common format so it can be processed efficiently downstream. 

Internet gateway systems are typically located close to the sensors and actuators that generate the data. For example, a smart home system might include six HD cameras that each send data through a physical wire to an on-site data acquisition system (DAS). The DAS aggregates data from the sensor network, digitizes it, does some pre-processing, and may compress it to reduce its size before sending it down the pipeline. 

Edge IT 

Edge computing is all about facilitating computing transactions closer to the source to reduce latency and manage the load on data centers. Edge IT systems are the first stop for data that is uploaded from your sensors to the cloud. Here, specialized applications can be used to conduct analytics, parse large amounts of data for anomalies or KPI violations, and generate meaningful insights before passing them on to the data center. 

Data Center/Cloud

Data that requires deeper processing or analysis may ultimately be forwarded to a physical data center or to a cloud-based data storage server. Here, operators can implement cutting-edge technologies to analyze data, combining it with information from other sources to get a deeper understanding of how the system is behaving. 

Data analytics can be used to understand how sensor data varies over time and in relation to other variables. Machine learning technology can be used to automate and optimize actuators in response to sensory data or based on user patterns. There may also be some type of data visualization technology implemented that applies user business logic to data and presents information to users in an easily understandable format (tables, graphs, charts, etc.)

Summary

IoT architecture consists of many systems working together to facilitate two-way data transfer and communication between smart devices and the consumers and corporations that use them. 

As you design and develop your own IoT-enabled device, you'll need to plan ahead to ensure its compatibility and efficient functioning with other elements of the IoT architecture. The best way to verify that your IoT device works according to specifications is through rigorous programming, testing, and emulation with the Total Phase Promira Serial Platform. With our debugging tools, embedded engineers can streamline the development of IoT devices that fit seamlessly into existing IoT architecture models.

Request a Demo and we'll show you how you can use our development platform and debugging tools to make your project a success.

Which Host Adapters Can Operate as either SPI Slave or SPI Master and Run from One Computer?


Question from the Customer:

I have been using the Cheetah SPI Host Adapter as an SPI Master – now I need the functions of both an SPI Master and an SPI Slave. I see the Cheetah adapter only functions as an SPI Master.

For ease of use, I’m looking to use the same model for these functions. I need to operate these SPI devices independently on a PC, without having to “toggle” between the two devices.

  • What are your recommendations?
  • Also, in SPI Slave mode, can the adapter send data to the SPI Master Device in “normal” SPI communication?

Response from Technical Support:

Thanks for your question! Both the Promira Serial Platform and the Aardvark I2C/SPI Host Adapter can function as either a Master or a Slave device for SPI. Following is a summary of what both tools can do for you.

Summary of Master / Slave Emulation

Two Promira platforms, as well as two Aardvark adapters, can operate from one computer. Each tool will need its own software interface, such as a separate instance of the Control Center Serial Software. Neither tool can change its role as Master or Slave "on the fly".
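To make the two-device setup concrete, here is a minimal C sketch using the Aardvark API, assuming two Aardvark adapters enumerated on ports 0 and 1 of the same computer, one permanently acting as SPI master and the other as SPI slave (the Promira platform follows the same pattern with its own API):

  #include <stdio.h>
  #include "aardvark.h"           /* Aardvark API header from the API package */

  int main (void) {
      /* Each adapter gets its own handle; no toggling between roles */
      Aardvark master = aa_open(0);
      Aardvark slave  = aa_open(1);
      if (master <= 0 || slave <= 0) {
          printf("Unable to open both Aardvark adapters\n");
          return 1;
      }

      /* Identical SPI configuration on both sides: mode 0, MSB first */
      aa_configure(master, AA_CONFIG_SPI_I2C);
      aa_configure(slave,  AA_CONFIG_SPI_I2C);
      aa_spi_configure(master, AA_SPI_POL_RISING_FALLING,
                       AA_SPI_PHASE_SAMPLE_SETUP, AA_SPI_BITORDER_MSB);
      aa_spi_configure(slave,  AA_SPI_POL_RISING_FALLING,
                       AA_SPI_PHASE_SAMPLE_SETUP, AA_SPI_BITORDER_MSB);
      aa_spi_bitrate(master, 1000);         /* 1 MHz master clock */

      /* The slave queues a response, then the master clocks one transaction */
      u08 response[2] = { 0xAA, 0xBB };
      aa_spi_slave_enable(slave);
      aa_spi_slave_set_response(slave, sizeof(response), response);

      u08 out[2] = { 0x01, 0x02 };
      u08 in[2];
      aa_spi_write(master, sizeof(out), out, sizeof(in), in);
      printf("Master received: 0x%02X 0x%02X\n", in[0], in[1]);

      aa_close(master);
      aa_close(slave);
      return 0;
  }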

Because you have been using the fast and powerful Cheetah adapter, we recommend the Promira platform for comparable performance levels.

Advantages of the Promira Serial Platform

In addition to higher clock speeds, the Promira platform provides many built-in advantages over the Aardvark adapter:

You will need the appropriate Active level application for your setup. Each application is licensed separately, and the level 2 and 3 applications also require a license for the preceding level.

SPI Active - Level 1 Application

  • Clock speed up to 12.5 MHz for Master functionality
  • Clock speed up to 8 MHz for Slave functionality
  • Supports two GPIOs
  • One slave select
  • Supports Single I/O SPI

SPI Active - Level 2 Application

  • Clock speed up to 40 MHz for Master functionality
  • Clock speed up to 20 MHz for Slave functionality
  • Supports up to 12 GPIOs
  • Supports up to three slave selects
  • Supports Single and Dual I/O SPI

SPI Active - Level 3 Application

  • Clock speed up to 80 MHz for Master functionality (faster than the Cheetah adapter)
  • Clock speed up to 20 MHz for Slave functionality
  • Supports up to 16 GPIOs
  • Supports up to eight slave selects
  • Supports response size of up to 256 bytes
  • Supports Single, Dual, and Quad I/O SPI

Comparing the SPI Tools

For an easy comparison, here’s a table that shows you the primary features of the Aardvark and Cheetah adapters versus the Promira platform with the Active applications:

Chart of Promira Serial Platform vs Cheetah and Aardvark Host Adapters

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you have questions about our Total Phase products, feel free to email us at sales@totalphase.com. You can also request a demo that is specific to your application.

What is IoT Data Analytics (Internet of Things)


Introduction to IoT Data Analytics

When the internet was first created, it was only accessible to a small group of private companies and government agencies. After the World Wide Web (WWW) was created in 1989 by Tim Berners-Lee (who would later be knighted for his contributions), the internet became widely accessible and businesses around the world began coding their own websites and connecting with customers online.

The next major turning point in connected technologies was the mobile revolution of the early 2000s. As computer processors and data storage became smaller, faster, and cheaper, the world's leading electronics manufacturers released the first smartphones. In 2019, there are 5.1 billion unique mobile subscribers with nearly 9 billion total connected devices - that's more connected mobile devices than the entire population of the world.

The Internet of Things (IoT) represents the next major step in the evolution of connective information technology. With the IoT, connectivity has extended beyond mainframes, computers, and connected mobile devices to include physical objects of all kinds in a variety of commercial, industrial, and consumer applications. Any device that can be connected to the internet is an IoT device, and there are already billions of IoT devices deployed around the world today (and more on the way).

Internet of Things Analytics

Big Data & IoT Analytics - A Perfect Fit

The development of IoT solutions required processors that were cheap to manufacture and consumed little power while being capable of wireless communication. In all, three major factors have made it possible for billions of IoT devices to exist today:

  1. RFID - The widespread adoption and integration of RFID tags into objects. An RFID tag is a smart barcode that can be identified and tracked using electromagnetic fields. They have small on-board batteries, use very little power and can be detected by an RFID reader or scanner from hundreds of yards away.
  2. IPv6 - The previous Internet Protocol Version 4 (IPv4) standard allotted just 32 bits of space for IP addresses, and as a result, there was a growing realization that the number of available IP addresses would be exhausted. The new IPv6 standard allocates 128 bits of space for IP addresses, making billions more IP addresses available for assignment. Without the new IPv6 standard, it would be impossible for each IoT device to have its own unique IP address.
  3. Enhanced Wireless Speed - The increased availability and speed of wireless networks have accelerated the adoption of IoT devices, since more businesses and consumers have existing wireless infrastructure that can be used or modified for use with IoT devices. 

The concept of Big Data has grown in relevance and importance along with the proliferation of the IoT. The key difference between data and "Big Data" comes down to three factors: volume, velocity and variety. 

  • With more connected devices (hardware) and applications (software) floating around, humans and machines are generating a greater volume of data than ever before.
  • The velocity of that data is also increasing - it continues to be generated at faster and faster rates each day. 
  • There is also significant variety in the types of data being generated - textual, images, videos, audio, sensor data and metadata, just to name a few.

It is also useful to distinguish between human-generated data and machine data. Facebook users upload more than 900 million photos a day - that's human-generated data. Machine data includes things like sensor readings from IoT devices, computer-generated event logs from an application or operating system, telemetry logs, financial instrument trades, and others. The discipline of big data analytics has evolved to deal with the growing volume, velocity and variety of data that is being produced each day.

Big Data and IoT Data Analytics

IoT data analytics represents the implementation of Big Data management and processing techniques to IoT analytics applications. Companies or facilities that deploy a high number of IoT devices need big data analytics to collect data from across the network, clean and transform the data, aggregate it into a single system, and analyze the data in real time to make effective use of it. 

It may be difficult for the uninitiated to fully appreciate the quantity of data that can be generated and captured using IoT devices, so let's look at a current example. The United Parcel Service (UPS), headquartered in Atlanta, Georgia, recently undertook a project to improve its delivery service using the IoT. Each vehicle in the fleet has been fitted with a connected device that uses sensors to capture data from the environment.

Sensors are the key to data generation in the IoT. Different types of sensors can be programmed to collect information on temperature, humidity, other environmental factors, pressure, location, speed, acceleration, the orientation of a physical object in space, infrared, smoke, gas, chemicals, and many, many other things. In the case of the UPS trucks, sensors are used to collect data on roughly 200 different environmental and operational factors.

The UPS delivery fleet consists of roughly 110,000 vehicles. Each one records 200 different types of data. The data will also be combined with metadata (date, time, truck number, driver, etc.) to make it more relevant and useful. Even if the trucks only transmit sensor data from their RFID chips to a centralized RFID reader once per day (somewhat useful, but not very useful), that's still at least 22,000,000 individual data points per day. 

Now, imagine that UPS wants real-time visibility into their operations using this sensor data so they decide to transmit data from the trucks every six seconds - that makes 14,400 uploads per day, generating over 300 billion unique data points. Big data processing and analytics is the major key to making sense of all of this data and extracting useful and actionable insights that can drive business results. 
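A quick back-of-the-envelope check on those figures: one upload every six seconds is 86,400 ÷ 6 = 14,400 uploads per truck per day, and 14,400 uploads × 110,000 vehicles × 200 data types works out to roughly 316.8 billion data points per day, consistent with the "over 300 billion" estimate above.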

The Los Angeles MTA has also tried to implement this type of real-time monitoring on their city buses, allowing them to optimize bus usage and monitor traffic flows. The Total Phase Komodo CAN Duo Interface was used to collect the CAN data from the city buses and transport it back to the MTA headquarters.

IoT Data Analytics Applications

IoT devices and big data analytics capabilities are becoming cheaper and more widely available, creating new opportunities for IoT analytics applications that drive innovation and business decision-making across industry verticals. While some companies choose to manage their own Internet of Things analytics, there are also IoT data services and IoT analytics services - companies that specialize in the effective processing of IoT data into business insights. 

With that in mind, let's review three innovative new industrial IoT applications:

IoT Data Analytics in Agriculture

Farmers are using data from IoT devices to improve their crop yields, planning, and maintenance of agricultural operations. A company called The Climate Corporation is using IoT devices with sensors that measure soil quality and moisture, helping farmers determine how to rotate crops and when they should be watered. Farmers are also using IoT devices to collect data from farming vehicles and are using IoT drones for aerial imagery analytics.

IoT Data Analytics in Food Services

Restaurants and bars are using the IoT to help monitor their inventory and find more efficient ways to manage business. A company called I-TAPR2 technologies uses a wireless smart tap that monitors beer flow and helps food service managers determine which products are selling the most, when to order new inventory, which beverages they should focus on marketing, and which ones they should stop carrying.

IoT Data Analytics in Logistics Management

Through our UPS example, we've already seen how IoT data analytics can impact logistics management. UPS claims on their website that IoT data analytics has helped them find savings of over $400 million annually. That's mostly on the transportation side, but on the warehousing side of logistics there are even more ways to leverage the IoT. Warehouses can use sensors and robotics to optimize their layout, reduce labor costs, track inventory and orders through the supply chain, and automate inventory management to reduce errors.

Total Phase Builds Tools and Products for IoT Engineers

If you're building an innovative new product that will change the world with IoT data analytics, Total Phase is here to help. Our range of development and diagnostic products can help you save time and resources in product development, testing, and debugging, helping you innovate faster and reduce your time-to-market. 

Ready to learn more? 

Request a Demo for Your Specific Application. We'll show you how best to apply our hardware and software tools to your project.


To Verify the USB Data Path is Operating at Maximum Speed, How Do I Adjust Periodic Timeout and Polling?


Question from the Customer:

I am using the Beagle USB 480 Protocol Analyzer to debug some issues in my CDC ECM implementation on an STM32 MCU. I have questions about the results I'm seeing in the Data Center Software.

I am looking into the significance of the POLL and IN-NAK counts, as well as the Length field, which is reported in some records as bytes and in others as time in msec or sec.

Are these values generated at my USB peripheral device in the STM32 or does the Beagle USB 480 analyzer tag these records?

IN-NAK data showing time-out

IN-NAK data without time-out

My important question is about polling. The polling number is inversely related to the data rate of the packets being processed: the smaller that number is, the closer we are to the data rate limit. For example, if the poll count shows 8 or 9, the data path is very close to the maximum speed at which it can operate.

My issue - poll data numbers are only visible when I allow IN messages to be displayed. Is there another way to view polling data numbers?

Response from Technical Support:

Thanks for your questions! The first screenshot that you provided shows that a timeout occurs after a specific period. The msec measurements appear to be normal behavior. The periodic timeout can be filtered out. For polling information, you can use the Delayed-Download Capture option supported by the Beagle USB 480 Protocol Analyzer.

Why Periodic Timeout Occurs

A Timeout error occurs when no data is seen before the timeout interval elapses. This can occur during one of the following events:

  • No data was seen on the bus
  • A pause in the transmission of data exceeded the timeout interval.

If the combined time duration of all collapsed transactions is 2 seconds or more, the Beagle USB 480 analyzer and Data Center Software report this behavior as a Periodic Timeout. A Timeout error is a generic error that occurs when the capture of a transaction times out while waiting for additional data (>250 ms). This is normal, expected behavior and cannot be attributed to a bus or hardware issue.

Filtering Periodic Timeout

You can filter out the Periodic Timeout. In the LiveFilter tab under USB 2.0, uncheck SOF/Keep-Alives and then click the Apply button.

SOFs Keep-Alives option with Data Center Software

Polling Data and Delayed-Download Capture

To view the polling data, we recommend using the Delayed-Download functionality with your Beagle USB 480 analyzer.

  1. To run a delayed-download capture, select the Delayed-Download Capture Mode option in the Device Settings dialog of the Data Center Software. During the delayed-download capture, there will still be a small amount of Beagle USB 480 analyzer traffic on the capture bus because the software pings the analyzer to retrieve capture statistics. If the monitored device is High-speed and shares its host controller with the Beagle USB 480 analyzer, we recommend enabling the Omit packets matching the Beagle analyzer’s device address option. This will filter out the few Beagle USB 480 analyzer packets that remain during the delayed-download capture.
  2. As the "remaining packets" are stored in the Beagle USB 480 analyzer's hardware buffer during the capture, we also recommend enabling the Hardware input filter options. This prevents non-essential traffic from being saved in the hardware buffer, which allows a longer capture period.
  3. Once the capture settings have been set, click the Run Capture button to open the Delayed-Download Capture dialog.
  4. Set the polling interval for the capture. While polling during the capture, the Data Center Software checks the hardware buffer usage and displays it in the progress bar.
  5. This polling will generate traffic on the bus; polling can be disabled to eliminate this traffic by choosing Never for the polling interval.
  6. Start the capture by clicking the Start Capture button. If polling is enabled, the progress bar will show the portion of the hardware buffer that has been filled with capture data. The progress bar will be updated every time the Data Center Software polls the Beagle USB 480 analyzer.

Delayed-Download Capture dialog from Data Center Software
  7. When the hardware buffer is full, the capture will stop and the dialog will indicate that it is ready to download the capture from the hardware.

For more information, please refer to the Delayed-download Capture sections of the Beagle Protocol Analyzer User Manual and the Data Center Software User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

More questions? You can send us your questions via sales@totalphase.com. You can also request a demo that applies to your application: https://www.totalphase.com/demo-request/

The Rise of Type-C Headphones and Its Key Features


USB Type-C has become a ubiquitous connector type in the world of USB and the cable industry. Its introduction brought many enhanced capabilities and new features that provide users with unprecedented amounts of power, versatility, and speed. Because of its advanced capabilities, many device and cable producers have been eager to incorporate this technology into their designs, and the trend shows no signs of stopping. In fact, Type-C has already joined forces with other technologies like DisplayPort and Thunderbolt, creating all-new cable opportunities that will surely progress the way consumers interact with their devices.

USB Type-C has already made its mark in the cable industry, but this connector type is also being implemented within another type of technology – headphones.

As mentioned in our blog, “The Evolution to USB Type-C Headphones”, there are some advantages to Type-C headsets:

  1. Compared to the 3.5mm jack, the USB Type-C connector is more compact. Its smaller footprint requires less space.
  2. For devices that don’t need to charge while playing audio, a single connector can handle charging or audio connectivity.
  3. Operational power is available for things like amplification or noise reduction.
  4. For the digital approaches, moving the Digital-to-Analog Converter (DAC) outside the handset potentially allows for greater isolation from electrical interference.
  5. Reduction of overall power consumption
  6. Support for new features such as hotword detection

Type-C headsets have been on the market for some time, having already been adopted by companies including Intel, LeEco, HTC, and JBL. Recently, Apple, which has pioneered some of the most widely used portable media players in the world, announced that it will allow third-party manufacturers in the Apple MFi program to produce Lightning to USB Type-C adapters for USB Type-C audio.

One Port to Rule Them All

Many device manufacturers are starting to ditch the 3.5mm jack, replacing it with just one port that can support a variety of functions. For USB-supported phones in particular, the Type-C port is taking over, including for digital audio.

Type-C headphones to replace standard headphone jack

Photo by Kaboompics

Problems with USB-C Charging and Audio Adapters

While having one port that performs it all can be an advantage, it has also raised concerns from users who wish to play audio and charge their device at the same time. To address this concern and the rising demand for a solution, companies have released adapters that allow users to perform both functions concurrently, but users have reported that some adapters haven’t always performed as expected. An article by The Verge notes: “Amazon is littered with many cheap, no-name adapters, but in my experience, many of them work poorly or simply don’t work at all.” In terms of audio, this can be due to the nonconformity of audio output standards between devices; for power charging capabilities, it can be due to faulty devices and connections.

Easily Debug your USB Applications

Type-C devices normally support powerful charging capabilities, using USB Power Delivery to establish a negotiated connection and provide or consume the required current and voltage within safe limits. When combining technologies and creating a product that is new to the market, infrastructure issues can arise during development. To ensure the Power Delivery (PD) protocol is working properly, using the Total Phase USB Power Delivery Analyzer to passively capture the PD communication between two Type-C devices can help avoid any charging or enumeration issues. Specifically, with this tool, users can monitor PD traffic on the CC1/CC2 lines, as well as observe VCONN and VBUS measurements.

For a quick demonstration on how to use the Total Phase USB Power Delivery Analyzer, see our video below:

Total Phase provides numerous debugging and development tools for USB, including our line of USB Protocol Analyzers, the USB Power Delivery Analyzer for Type-C devices, and our Advanced Cable Tester v2, which comprehensively tests a variety of USB cables for pin continuity, DC resistance, E-marker validation, and signal integrity. Interested in seeing how Total Phase can support your USB projects? Email our sales team at sales@totalphase.com.

How Do I Reduce the Latency of Data Being Delivered from a USB Port to the Computer?


Question from the Customer:

We are using the Beagle USB 480 Protocol Analyzer and Data Center Software to sniff Full-speed USB data. We observed that data appears on the DUT display first and in the Data Center Software afterwards. This was not expected: in our setup, data should appear in the Data Center Software before reaching the display of the destination device.

This makes us wonder about the time from data arriving at the hardware edge of the Beagle USB 480 analyzer to its being displayed in the Data Center Software. How can we reduce the latency of data being delivered via the USB port?

Can you please let us know what affects the time required for data to travel from the USB port to the Data Center Software? This will help us account for some delta in our calculations to determine the time required for data to travel from the source to the destination.

In our setup, the Data Center Software is set to Auto Detect and Real Time mode.

Response from Technical Support:

Thanks for your questions! The Beagle USB 480 analyzer contains a 64 MB on-board buffer, which stores incoming data before it is displayed. The data is processed, decoded, and then displayed in the GUI or through API functions. Some latency or other delay in the arrival of data during sniffing is possible due to the GUI software, the computer’s operating system, and other system-related overheads.

This latency cannot be changed in the Data Center Software. However, to decrease the latency, the Beagle Software API can be used instead.

How to Decrease Latency with API

The API supports multiple operating systems (Windows, Linux, and Mac) and programming languages (C, Python, Visual Basic, and C#). Software examples are provided with the API package, as well as instructions to run and execute the software examples.

Buffer Settings and Recommended API Commands

The API command bg_latency() sets the capture latency to the specified number of milliseconds. The capture latency effectively splits up the total amount of buffering into smaller individual buffers. The amount of buffering is set with the API command bg_host_buffer_size().

After one of these individual buffers is filled, the read function returns. Therefore, in order to fulfill shorter latency requirements, these individual buffers will be set to a smaller size. Setting a small latency can increase the responsiveness of the read functions. If a larger latency is requested, then the individual buffers will be set to a larger size.
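
As an illustration, here is a minimal sketch in C of how these settings might be applied before starting a capture. It is based on the C bindings shipped with the Beagle API package; the port number and buffer sizes are example values, and the exact function signatures should be checked against the API documentation.

    #include <stdio.h>
    #include "beagle.h"   /* header from the Beagle API package */

    int main (void) {
        /* Open the Beagle analyzer on port 0. */
        Beagle beagle = bg_open(0);
        if (beagle <= 0) {
            printf("Unable to open Beagle device on port 0\n");
            return 1;
        }

        /* Allocate 4 MB of host-side buffering for incoming capture data. */
        bg_host_buffer_size(beagle, 4 * 1024 * 1024);

        /* Split that buffering into small individual buffers so that read
           functions return within roughly 2 ms. Smaller latency values give
           more responsive reads at the cost of more overhead per byte. */
        bg_latency(beagle, 2);

        /* The timeout must be set longer than the latency. */
        bg_timeout(beagle, 500);

        /* ... enable the capture and read data here ... */

        bg_close(beagle);
        return 0;
    }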

For more information about API commands, please refer to the section API Documentation in the Beagle Protocol Analyzer User Manual.

How Buffer Size Affects Latency and Overhead

There is a “fixed cost” to processing each individual buffer that is independent of buffer size, so there is a trade-off. Using a small latency increases the overhead per buffered byte. A large latency setting decreases that overhead, but it also increases the amount of time that the library must wait for each buffer to fill before it can process the contents.

Please note, this setting is distinctly different from the timeout setting; the latency time should be set to a value shorter than the timeout time. You can also affect the delivery time of the data through your choice of how data is captured.

Data Center Software Modes for Capturing Data with Reduced Latency

If you are capturing the traffic using the Packet view in the Data Center Software, we recommend changing to Sequential mode; the data captures will be much faster. Apart from any OS processes running in the background, with no other USB link traffic, packets are monitored at a speed of 40 Mbps, with a lag of 1 packet (~250 us). For more information about capture modes, please refer to the section Capture View in the Data Center Software User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

More questions? You can send us your questions via sales@totalphase.com. You can also request a demo that applies to your application.

What is a System on Chip (SoC)? - A Basic Definition


The System on a Chip market is projected to grow to over $207 billion by 2023 according to a report released earlier this year. System on a Chip technology is found across all industries and is used in embedded systems as well as general purpose computing devices. However, despite its popularity, there is still ambiguity and confusion surrounding the term.

For example, questions like: “what’s the difference between a System on a Chip and microcontroller?” are not uncommon, even among professionals. Part of the reason for this confusion is the general way marketers use the term and the lack of a standardized set of characteristics for System on a Chip. Here, we’ll define System on a Chip, explain different use cases, and explore how you can analyze traffic from System on a Chip devices like smartphones.

What is an SoC (System on a Chip)?

Let’s start by answering the “what is System on Chip?” question and defining System on a Chip clearly.

A System on a Chip, or SoC, is a single integrated chip (IC) that includes the components normally found in a standard computer system. For example, on an SoC you may find a CPU (Central Processing Unit), RAM (Random Access Memory), storage, I/O (input/output) ports, and more. SoCs also generally strive for efficiency in terms of being small in size and low in power consumption.

While there is no single comprehensive list of System on Chip components that sets a bar for what is and is not an SoC, the litmus test should be: “does the system contain all the major components of a computer or electronic system on a single, small, low-power chip?” If yes, it is fair to call the system an SoC.

SoCs break from the traditional approach to system architecture (e.g. with motherboards) where each component is discretely installed. This enables the creation of smaller and more efficient devices, driving innovation in the creation of netbooks, laptops, smartphones, and IoT (Internet of Things) devices.

Different Types of SoC

There are a variety of System on Chip (SoC) devices on the market today. As there are no standardized qualifications for what makes a system an SoC, different vendors have taken different approaches to SoC development and marketing. For example, in some cases, storage is available on the SoC; in others, it is handled outside the system.

Generally, you will see SoCs use either a microprocessor or a microcontroller along with the other peripherals that make the system complete. In this section, we’ll review some of the more common types of SoC.

SoCs that Use a Microprocessor

One of the most popular lines of SoCs with microprocessors is Qualcomm’s Snapdragon line of products. For example, the Snapdragon 855+ platform uses a Qualcomm Kryo 485 CPU. The compute power of the Kryo 485 is coupled with a GPU (Graphics Processing Unit), WiFi 6 features, an LTE modem, USB-C functionality, a camera, and more.

Snapdragon SoC processors are commonly found in mobile phones, tablets, and other smart devices. Because they enable low power consumption and high levels of computing power, they have gained popularity with engineers looking to maximize functionality with a limited footprint.

SoCs that Use a Microcontroller

It is common for the terms microcontroller and System on a Chip to be confused. When you consider the fact that microcontrollers are generally defined as single-chip microcomputers, it is easy to see why. That sounds very similar to the definition of System on a Chip.

So what is the difference? The functionality of a microcontroller is a bit more limited. For example, the peripherals that can be used with a microcontroller are limited compared to a full SoC. Similarly, a microcontroller does not generally enable the same level of functionality as a System on a Chip. That is, while an SoC can run a full operating system on a smart device, microcontrollers alone generally run an individual program.

It is very common to find microcontrollers used as a component of SoCs. For example, Texas Instruments’ CC2540 is built around the 8051 microcontroller core. The CC2540 extends the functionality of the 8051 with in-system programmable flash, RAM, Bluetooth functionality, and more. SoCs like the CC2540 and others that use microcontrollers are a big part of what is driving growth in the world of embedded systems.

Applications for ASIC (Application-Specific Integrated Circuit)

While SoCs enable general purpose computing for mobile devices, it is also common to find them used in ASIC (Application-Specific Integrated Circuit) applications. With an ASIC, the IC is designed to carry out a specific task as opposed to general purpose computing. These purpose-built applications are quite common in the embedded systems industry.

Research published by Semico suggests the ASIC/SoC market will grow at a healthy 8.5% CAGR (Compound Annual Growth Rate) through 2023. What are the key drivers of the growth? IoT, Industrial IoT, and Artificial Intelligence (AI). These embedded System on Chip SoC technologies will revolutionize a number of industries in the years to come. For more on the potential of AI and embedded systems, check out The Future of AI and the Embedded System.

Common Uses of SoC

The potential use cases for SoC seem to be almost limitless, with applications ranging from IoT in healthcare to smart home technology. However, some of the most frequent applications for SoC are smartphones, netbooks, tablets, and similar smart devices. These devices demonstrate the power of SoC to deliver significant general purpose computing functionality while reducing power consumption and space.

Here are a few popular examples of SoCs in modern smart devices:

  • Samsung Galaxy S10 5G - This powerful 5G Android phone uses the Qualcomm Snapdragon 855 SoC platform. Despite the small size of the Snapdragon, it is capable of enabling virtual reality experiences and streaming real-time 4K video.
  • Samsung Galaxy Tab S3 - The popular S3 tablet was the first mobile media focused tablet to be powered by the Snapdragon 820. The Snapdragon 820 enabled fast connectivity, high-quality graphics, and efficient battery use for users of the tablet.
  • Intel NUCs based on Gemini Lake SoCs - A NUC (Next Unit of Computing) is a small PC from computing giant Intel. NUCs enable general purpose computing for home and small business users at a low cost. The NUC 7 PJYH and NUC 7 CJYH were based on Gemini Lake SoCs.
  • Fossil’s Smartwatch for Fitness - The Wear 3100 platform is a line of SoCs that has enabled the development of a variety of smartwatches. Late last year, Fossil released their Fossil Sport Smartwatch, which used the Wear 3100 platform. A number of other manufacturers also use the chip, enabling vendors other than Apple to compete in the smartwatch space.

How to Analyze Traffic From Smartphones Using SoC

As we have seen, smart devices are some of the most common applications of SoC technology today. This makes it important for engineers and QA testers in the industry to understand how to debug and analyze traffic from smartphones and similar devices that use SoC technology. The Beagle USB 480 Protocol Analyzer and Beagle USB 480 Power Protocol Analyzer are excellent tools for doing just that. By connecting to the communications bus of an SoC enabled smartphone, engineers can capture granular data down to the protocol level to enable better analysis of software and hardware.

Coupled with the Total Phase Data Center Software, the Beagle USB 480 analyzer enables low-level protocol data to be translated into a class-level view in real time. Why is this important? Because with the raw protocol-level view, understanding the information is difficult. Transforming it to class level makes the information human-readable and simpler to understand. For an example of the benefits of real-time class-level USB analysis, check out this video.

In addition to real-time data analysis, the Beagle USB 480 analyzer enables you to capture USB data and save it to a file including ASCII data. For a step-by-step walkthrough on saving USB data to a file, check out this FAQ.

Interested in learning more?

In this piece, we answered the “what is system on chip?” question and explored some of the real-world applications of SoC technology and SoC in computer systems. We also explained how Total Phase protocol analyzers can enable embedded systems engineers and testers to debug and analyze SoC-based technologies like smartphones.

If you are not sure which Total Phase products are right for you, check out this USB Analyzer Product Guide. Alternatively, if you’d like to work with our team of experts and see a demonstration of Total Phase products in action, schedule a demo today!

Can the Promira Serial Platform Support and Generate the Master and Slave SPI Waveforms that I Need?


Question from the Customer:

I am looking for a programmable SPI interface adapter that will support communicating with two devices. For both devices, the clock speed is 10MHz and the clock is only active during transmit.

I’ve been looking for the right tool, and your Promira Serial Platform, along with the SPI Active - Level 1 Application, looks promising. Is the Promira platform capable of emulating and manipulating the waveforms and timing I need? Here are the key points:

  • Insert clock gaps between each word
  • Insert delays before and after each transmission
  • Support for words of 16-bit length
  • Ensure the serial clock is only active during communication, not always active

Response from Technical Support:

Thanks for your question! The Promira Serial Platform is an advanced serial device, and with the appropriate SPI Active level application and Promira Software API, the waveforms you need can be supported. Here’s an overview of the Promira platform, followed by recommended API commands for the specifications of your SPI waveforms.

Promira Serial Platform Overview

In addition to setting up the waveforms, the Promira platform supports other important features:

  • Signal level shifting from 0.9 V to 3.3 V
  • USB 2.0 connectivity
  • Ethernet connectivity
  • Support for SPI, I2C, and eSPI protocols
  • Up to 200 mA of power for target devices

Here is a summary of the SPI Active applications, which are licensed separately:

Using API for SPI Waveform Requirements

The Promira Software API supports 64-bit platforms on Windows, Mac, and Linux operating systems, and multiple programming languages, with a shared library for C, C#, Python, .NET, and VB.NET. Functional examples are also provided, which can be used as-is or modified for your specifications.

Following are our recommendations for deploying the desired word lengths and timing delays. For more details, please refer to the API Documentation section of the Promira Serial Platform I2C/SPI Active User Manual.

Word Length

The Promira platform supports SPI data word sizes from 2 bits up to 32 bits per word.

The API commands that we recommend are Queue SPI Master Write (ps_queue_spi_write) and Queue SPI Master Write Word (ps_queue_spi_write_word); both support word lengths from a minimum of 2 bits to a maximum of 32 bits.

Timing and Delays

Here are the SPI APIs (specific to SPI) that can be used to insert delays in the bus:

ps_spi_configure_delays: configure a user-definable delay between words, which can be 0 or greater than 1. No gap is included after the last word.

ps_queue_spi_delay_cycles: queue a delay value on the bus in units of clock cycles. The clock cycles are set with ps_spi_bitrate.

ps_queue_spi_delay_ns: queue a delay value on the bus in units of nanoseconds. The accepted values are greater than zero and less than or equal to 2 seconds.
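
As a rough sketch of how these calls fit together, the fragment below configures a 10 MHz clock, sets a two-cycle gap between words, and queues two 16-bit words followed by a delay before releasing slave select. The connection and queue-handling calls are reconstructed from memory of the Promira API examples, and the IP address is hypothetical; verify all names and signatures against the API Documentation section of the user manual before use.

    #include <stdio.h>
    #include "promira.h"      /* Promira management API */
    #include "promact_is.h"   /* I2C/SPI Active application API */

    #define TARGET_IP "10.1.1.1"   /* hypothetical address of the Promira platform */

    int main (void) {
        /* Connect to the Promira platform and open an SPI Active channel. */
        Promira pm = pm_open(TARGET_IP);
        pm_load(pm, "com.totalphase.promact_is");
        PromiraConnectionHandle conn = ps_app_connect(TARGET_IP);
        PromiraChannelHandle chan = ps_channel_open(conn);

        /* SPI mode 0, MSB first, 10 MHz clock (ps_spi_bitrate takes kHz). */
        ps_spi_configure(chan, PS_SPI_MODE_0, PS_SPI_BITORDER_MSB, 0);
        ps_spi_bitrate(chan, 10000);

        /* Two clock cycles of gap between words (accepted values: 0 or > 1). */
        ps_spi_configure_delays(chan, 2);

        /* Queue a transaction: assert SS, write two 16-bit words, hold a
           1 us delay, then release SS. SCLK only toggles while the queued
           words are shifting out. */
        PromiraQueueHandle queue = ps_queue_create(conn, PS_MODULE_ID_SPI_ACTIVE);
        u08 data[4] = { 0x12, 0x34, 0xab, 0xcd };
        ps_queue_spi_oe(queue, 1);        /* enable SPI outputs */
        ps_queue_spi_ss(queue, 0x1);      /* assert slave select 0 */
        ps_queue_spi_write(queue, PS_SPI_IO_STANDARD, 16, 2, data);
        ps_queue_spi_delay_ns(queue, 1000);
        ps_queue_spi_ss(queue, 0);        /* release slave select */

        /* Submit the queue for execution on the channel. */
        u08 queue_type;
        ps_queue_submit(queue, chan, 0, &queue_type);

        ps_queue_destroy(queue);
        ps_channel_close(chan);
        ps_app_disconnect(conn);
        pm_close(pm);
        return 0;
    }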

When Serial Clock is Active

The master device generates the serial clock (SCLK). As shown in the diagram below, SCLK is only active when slave select (SS) is asserted (pulled high or low, depending on polarity) to select a slave device with which to transfer I/O data.

Promira SPI Master/Slave Waveform

We hope this answers your question. Additional resources that you may find helpful include the following:

You can contact us and request a demo that applies to your application, as well as ask questions about our Promira Serial Platform and other Total Phase products.

Request a Demo

How DisplayPort Alt Mode is Enabled over a USB Type-C Cable


In the world of cables, USB Type-C is considered to be the most powerful and versatile connector type to date. One of Type-C’s features that allows for its flexibility is Alt Mode, which can optionally support non-USB signals including DisplayPort (DP) technology. This allows users to take advantage of multiple technologies all through a single cable, maximizing efficiency.

In this article, we’ll discuss how DisplayPort protocols operate on Alt Mode over a Type-C cable and how it compares to the operation of a standard DisplayPort cable.

Inside Type-C and its SuperSpeed Pairs

A USB Type-C cable connection can act as a USB host, USB device, USB-PD power consumer, USB-PD power supplier, and as a DisplayPort video connection. Inside the Type-C connector, there are 24 pins that serve a variety of functions that make these configurations possible. Specifically, the DisplayPort video protocol can be configured using Alt Mode, which can be carried out over the USB Type-C standard.

USB Type-C cables include four pairs of SuperSpeed wires, which can be used for high speed data transfer or can be configured in Alt Mode to support third-party protocols. These form four differential high speed lanes - for USB, there are two SuperSpeed receiver differential pairs and two SuperSpeed transmitter differential pairs, and a transmitter/receiver pair combination is sometimes called a lane. With the USB 3.2 specification in 2x2 mode, the cable can support bidirectional bandwidth of up to 20 Gbps by using two lanes simultaneously, each lane carrying 10 Gbps.

USB Type-C Cables can support DisplayPort via Alt Mode

Photo by Tony Webster on Flickr

Alt Mode Configuration Supports DisplayPort Protocol

Because DisplayPort signaling supports packetized data transmission like USB, enabling DP through Alt Mode on the differential SuperSpeed wires is possible. With USB 3.2, two or four of the differential high speed lanes can be used for Alt Mode. If USB data transmission is required, it will use two lanes, leaving the remaining two for Alt Mode, otherwise, all four lanes are available for Alt Mode.

In the two lane mode, Alt Mode for the DisplayPort protocol can support 4K resolution at 60 Hz, or HDR 4K at 60 Hz using 4:2:0 and 12 bpp. If all four lanes are used to enable Alt Mode for DisplayPort, DisplayPort can operate at its full performance of up to 8K x 4K at 24 bpp, 60 Hz refresh, 4:2:0 format for one display, per the DisplayPort 1.4 specification. With the recently released DisplayPort 2.0 specification and cables coming soon, this increases to an incredible 16K x 8K at 30 bpp, 60 Hz refresh, 4:4:4 HDR format with compression. In either configuration, the DisplayPort Aux channel uses the SBU pins within the Type-C connector.

Because certain pins are required to be included and available for use within a Type-C cable no matter the configuration, users can expect certain features, including USB and video data transfer and power charging. Even if all SuperSpeed lanes are being used for full DisplayPort performance, the USB 2.0 High-Speed pins remain active, allowing the user at least 480 Mbps of bandwidth. Additionally, the CC and GND lines are also required to be available, ensuring that the USB Power Delivery protocol can support USB Power Delivery device operation.

Obtaining DisplayPort Performance through a Type-C Cable

How can the Type-C connector support full performance of DisplayPort 1.4? It has to do with how the SuperSpeed lanes are configured. The USB protocol transfers data bidirectionally: each lane uses one transmitter pair and one receiver pair, each made of one positive and one negative wire, to send and receive data from one end of the cable to the other. Per the USB 3.2 Gen 2x2 specification, one lane can carry up to 10 Gbps in each direction, so when both lanes are used concurrently, 20 Gbps can be transmitted and 20 Gbps can be received across the four SuperSpeed pairs.

The DisplayPort protocol works a little differently in that it only transfers data in one direction. In a regular DisplayPort 1.4 cable, there are four data lanes that transfer DP data at 8.1 Gbps unidirectionally; this configuration is possible within a passive, full-featured USB Type-C to Type-C cable by configuring the four USB high speed lanes as Alt Mode and having each transfer DP data in one direction. With DisplayPort 1.4 supporting a total of 32.4 Gbps over 4 lanes (4 x 8.1 Gbps), full performance of DisplayPort is possible with the USB Type-C cable.

More on the Latest Standard DisplayPort Cables

The Type-C connector has proven its worth in the industry with its continuous adoption and partnership with numerous technologies, including video protocols like DP. While Type-C allows the DP specification to operate at its full performance, standard DisplayPort cables are still very viable for users looking to transmit video signals, as DP is widely implemented in many devices including televisions, game consoles, and laptop computers. For a more in-depth analysis on the DisplayPort and its latest features between DP specifications 1.4 and 2.0, visit our article, “DisplayPort 2.0 is the Latest DisplayPort Spec – How does it Compare to DisplayPort 1.4?”.

Testing Cables to Ensure Compliance and Safety

Ensuring cables are properly and safely assembled is imperative to allow users to exploit the advanced features Type-C offers. There are multiple factors that can prevent the cable from operating per the standard; anything from misaligned pins or wires to poor signal quality can cause functionality issues for USB and Alt Mode for the DisplayPort protocol. With Total Phase’s Advanced Cable Tester v2, it’s easy to verify that all aspects of the cable, including pin continuity, DC resistance, E-marker, and signal integrity are up to standard. To verify all pins including SuperSpeed pins are aligned correctly without any shorts, opens, or routings, the Advanced Cable Tester v2 provides a complete overview of each pin with a dynamic visualization of results for easy debugging. The signal integrity tests allow testers to measure the quality of the signal on the High-Speed and SuperSpeed pairs up to 12.8 Gbps per channel to ensure that the cable can handle the appropriate bandwidth per the specification.

Pin Continuity test ensures accurate pin alignment

 

Signal integrity tests show the quality of the signal within cables.

While many manufacturers go through compliance programs to keep up with the latest USB and VESA specifications and certifications, that’s not the only hurdle to overcome to keep bad cables from landing in the hands of consumers. Quality testing beyond certification is vital for mass-scale production; however, even common quality control methods like statistical process control and functional testing are not foolproof. With Total Phase’s Advanced Cable Tester v2, manufacturers can quickly and affordably test USB, Apple Lightning, HDMI, and DisplayPort cables without the fear of overlooking specific criteria needed to meet cable specifications.

To learn more about how the Advanced Cable Tester v2 can help ensure your cables are up to standard, please contact us at sales@totalphase.com.

The Growing Use of Micro Electro-Mechanical (MEMS) Devices to Improve and Maintain the Health of Machines and People


MEMS (Micro Electro-Mechanical Systems) technology supports motion sensors that detect, report, and collect information on anything that moves. The data these sensors generate is applied to many aspects of our daily lives, ranging from necessary and practical safety standards to augmented reality entertainment.

Artistic view of MEMS

Image by Intographics

About Motion Sensors and IMUs

There are three sensors that detect specific types of motion: accelerometers, gyroscopes, and magnetometers.

  • An accelerometer senses tilt, acceleration, vibration, and impact. With calculations on sampled measurements, acceleration data can be used to determine speed.
  • A gyroscope senses rotation relative to an axis.
  • A magnetometer detects magnetic fields on Earth. Like a compass, which also responds to magnetic fields, a magnetometer indicates which way is North.

These components are often used together, either on a board or integrated in an Inertial Measurement Unit (IMU). An IMU may comprise two or three types of sensors. For example, a 6-axis IMU contains a 3-axis accelerometer and a 3-axis gyroscope; adding a 3-axis magnetometer creates a 9-axis IMU.

Accelerometers and Physical Safety

An accelerometer is part of many safety devices. It is often used to trigger an alarm when abrupt acceleration occurs, helping prevent physical damage. In passenger vehicles, when impact occurs beyond the designated safety threshold, airbags are quickly inflated to protect the driver and the passengers. To ensure the life-saving response is triggered the moment it is needed, high G accelerometers are used.  When a sudden change in speed occurs, the G-sensor outputs a large value that is used to invoke the necessary quick response.

Accelerometers and Maintenance

Accelerometers are also used for maintenance. Under the ongoing stress of motion, moving parts eventually become worn or misaligned, and new vibrations occur. By monitoring active machinery, accelerometers detect such vibrations at an early stage, long before you can feel or see the effects of wear and tear. Detecting the minute beginnings of wear and tear allows lower-cost maintenance to be performed, long before significant and expensive damage occurs.

Sensors for Monitoring and Maintaining Human Health

MEMS are also used to monitor the health and wellness of people.  The use of micro-devices is growing in medical health care, which benefits the physicians and the hospital staff, as well as the patients.

For critical care, frequent checks are necessary to monitor the patient’s condition: is the patient stable, getting better, or getting worse? With micro-sized, low-power components, it becomes cost-effective to continuously gather information about the patient’s vital signs. To analyze the data from biosensors, Philips Guardian developed the Early Warning System (EWS). Vital signs are transmitted from the patient to the EWS. The information is processed with advanced algorithms that provide caregivers with the status of a patient’s current condition. When medical intervention is needed, the EWS sends an alert.

Compared to frequent and routine checks, this ongoing collection of information provides a much more detailed insight into a patient’s condition, making it possible to detect the development of problematic issues much faster and without additional effort from the medical staff.

Consumer Health Products

For those personally interested in maintaining and improving their health, many consumer products are available to monitor daily exercise: how many steps and staircases walked, pulse rate, quality of sleep, calories burned and more.  Fitbit is a popular activity-tracker that uses 3-axis accelerometers to monitor physical activities.  Some Fitbit products include micro GPS receivers that show the “map” of activity, including the pace and elevation. With Fitbit apps, the components built in a smartphone can be utilized to monitor and record basic information such as steps, distance moved that day, and the number of calories burned.

Enhanced Self Care

In addition to achieving goals in physical exercise, monitoring one’s own health has encouraged many patients to manage their care with prescribed medications on a timely basis.  Evidation Health conducted a study, and found that when using wearables and apps, patients were 1.3 times more likely to take their prescriptions on a more regular schedule. This practice is essential for chronic, life-threatening conditions such as diabetes and hypertension.

Smart Clothing

Smart clothing is available for gathering information from the electrical impulses of the human body. The textile is made of conductive fibers that are used to conduct the electrical signals, which are delivered to a module to process and transmit information. For convenience, the processing module is attached to the clothing. One such product on the market is Bioman+, which is manufactured by AiQ Smart Clothing. This high-tech apparel is available as vests, t-shirts, and sports bras. These garments are typically used for monitoring the heart rate of endurance and extreme sports participants, as well as remotely monitoring the heart conditions of patients for cardiac rehabilitation and other heart-related conditions. Like most clothing, this advanced textile is machine washable.

MEMS and Electronic Games

In the world of MEMS and wearables, high-tech gamers are a growing audience. Compared to medical devices, gaming wearables have a way to go to catch up. But they are moving forward.

Many electronic games are acquired every day. For example, since July 2016, Pokémon Go, a mobile augmented reality game, has been downloaded to mobile devices over one billion times. Over 100 million users actively play this game every month. In this augmented reality game, images of Pokémon avatars are displayed on the game’s map on the mobile device, and players physically pursue them in a virtual world that is mapped over the physical world. Pokémon Go is more than a hand-held interactive toy; it is an appealing electronic game that gets people to go outside and play.

To motivate players to move their eyes away from their handheld screens, a companion device was developed: the Pokémon Go Plus, which can be worn as a bracelet or clipped to your clothing. This allows users to put their smartphones in their pockets and still continue to play.

Pokemon Go Plus Bracelet with MEMS

Image from Nintendo Life

Using Bluetooth technology, this technical companion tells you where you are in the game. When you are close to a Pokémon, the bracelet vibrates and flashes light. You can then strategize how to capture that Pokémon. A non-wearable alternative is the Poké Ball, which inspires the action of throwing a ball to capture a Pokémon. The ball is not intended to be thrown; instead, the sensors within the ball measure the potential velocity, elevation, and angle based on the measurements from the player’s wrist and arm motions – the act of throwing, but not actually throwing the ball.

Total Phase and Embedded IMU Devices

MEMS devices monitor and provide data about physical movements. I2C and SPI embedded devices can be monitored, tested, and programmed with Total Phase I2C and SPI tools. Here is a video that shows how to use the Aardvark I2C/SPI Host Adapter and the Beagle I2C/SPI Protocol Analyzer to evaluate and prototype an I2C-based system that includes a 3-axis accelerometer.

If you would like more information about using Total Phase tools to develop, test and evaluate your product designs, we invite you to request a demo that is specific to your application.

 


What is the Importance of Embedded Networking?


Embedded systems are small computers that are implemented as part of a larger system or product and designed to execute a specific function or application. They include a processing unit, usually a general-purpose microcontroller, along with on-board peripheral components such as I/O ports, memory (program memory, RAM and EEPROM), A/D converter, oscillators, and more. An embedded system may be connected to sensors that collect information about the environment and actuators that are used to trigger functions within the system.

What is Embedded Networking?

Embedded systems first originated in the mid-1960s - more than a decade before the first personal home computers and nearly 25 years before the introduction of the internet. One of the earliest applications of an embedded computer system was the Apollo Guidance Computer, introduced in 1966 to support guidance, navigation, and control of the Apollo spacecraft during NASA's lunar missions of that decade. Today, embedded systems are used in a range of industrial, commercial, and residential applications, from controlling manufacturing systems to enabling vehicle safety features to powering home security systems and smart appliances.

Advancements in digital telecommunications technology have led to the development of embedded networking, a practice that expands the range of potential applications for embedded systems in a variety of contexts. The field of embedded networking deals with the network design and topology, hardware devices, and communication/data exchange protocols needed to connect embedded systems and exchange information between them.

Embedded systems engineers today have access to a range of wired and wireless communication options for implementing networking capabilities into their embedded systems. Effective design of an embedded networking product requires the selection of a protocol stack that enables the desired networking features and communication patterns while managing design constraints such as memory and power consumption. Embedded systems form the basis for the Internet of Things (IoT), networks of devices whose capabilities depend on internet connectivity.

For engineers that may be new to building networked embedded systems, we offer this basic embedded networking introduction. We'll look at several implementation models for embedded networks and show you how the most common embedded systems have implemented networking to support critical functions and applications in real-world settings.

Embedded Networking System

Ethernet is one of the most popular technologies for networking embedded devices with the internet.

Image courtesy of Unsplash

Embedded Networking Introduction: The OSI Model

Our discussion of embedded networking begins with an overview of computer networking systems and how they function. The earliest conceptual model of computer networks was developed by the International Organization for Standardization (ISO) in 1984 and is known as the Open System Interconnection (OSI) model. 

The OSI model itself is conceptual in nature - it does not include any actual specifications for network implementation. However, the OSI model does provide a framework for understanding the components of a complete network communication system. As we will see, many of today's most commonly implemented networking technologies use features and protocols that reflect parts of the OSI model.

The OSI model defines a seven-layer architecture for a complete communication system:

1. Application Layer

The application layer is the top-most layer of the OSI model. Data transmissions frequently originate in the application layer of the origin device and terminate in the application layer of the target device. This layer deals with the identification of services and communication partners, user authentication, and data syntax. Some common application layer protocols include hypertext transfer protocol (HTTP), Telnet and file transfer protocol (FTP).

2. Presentation Layer

The presentation layer is a software layer that formats and encrypts data that will be sent across a network, ensuring compatibility between the transmitting device and the receiving device. The presentation layer includes standards such as ASCII, JPEG, and MPEG.

3. Session Layer

For data transfer to occur between applications on separate devices, a session must be created. The purpose of the session layer is to manage, synchronize, and terminate connectivity between applications, ensuring coordinated data exchange while minimizing packet loss. The session layer can provide for full-duplex, half-duplex, or simplex communications.

4. Transport Layer

In the OSI model, the transport layer receives messages from the layer above and converts them into smaller units that can be efficiently handled by the network layer. In protocols such as TCP/IP, the transport layer adds a header to each data segment that includes the port of origin and the destination port address - this is called service point addressing. Service point addressing ensures that a message from the transmitting computer goes to the correct port once it arrives at the destination computer. The Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are popular transport layer protocols for devices that connect to the internet.

5. Network Layer

The network layer provides the features and functions that transfer data sequences from the host device to a destination device. Along with routing network traffic and reporting delivery errors, the network layer divides outgoing messages into packets and assembles incoming packets into messages. Network layer devices use protocols such as IP, ICMP, and IPX.

6. Data Link Layer

Data packets are encoded and decoded into bits in the data link layer, which may be divided into two sub-layers: media access control (MAC) and logical link control (LLC). Hardware network interface controllers are typically assigned a MAC address by the manufacturer that acts as a unique device identifier and network address within a network segment. While the MAC layer supports physical addressing, the LLC layer deals with data synchronization, error checking, and flow control. Protocols for the data link layer include IEEE 802.5/ 802.2, IEEE 802.3/802.2, and the Point-to-point protocol (PPP).

7. Physical Layer

The physical layer defines the electrical and physical requirements for networked devices, with control over the transmission and reception of unstructured raw data over the network. The physical layer also manages data encoding and the conversion of digital bits into electrical signals. Devices that operate at the physical layer include network interface cards (NICs), repeaters, and hubs.

With the OSI model as a basis, embedded systems engineers have several options for how to implement networking for their embedded devices. Networked systems are designed based on specific application needs and constraints such as cost, power consumption, and memory. Not all embedded systems require all of the functionality expressed in the OSI model - it is up to embedded engineers to determine which features are required and to implement suitable protocols from the required layers.

Three Different Types of Embedded Networking

1. Embedded Networking with CAN Bus

The Controller Area Network (CAN) protocol was developed to meet the embedded systems requirements of the automotive industry. CAN is a specification for a serial network that can be used to establish local connections between the microcontrollers in a motor vehicle. The CAN bus protocol is a two-wire, half-duplex system that works well for applications that demand high-speed transfer of short messages.

The specifications for the CAN bus protocol are described in the international standard ISO 11898:2003. The specification includes requirements for the physical layer and data link layers of the network, leaving individual engineers or manufacturers to implement other high-level protocols of their choosing to satisfy additional networking requirements. 

2. Embedded Networking with I2C Bus

The I2C communications protocol for embedded systems was invented by Philips Semiconductor in 1982. I2C is a two-wire, half-duplex serial communication protocol with a multi-master, multi-slave architecture whose primary application is short range, intra-board communication. Due to the limitations of the I2C protocol's 7-bit address space and the total bus capacitance of 400 pF, serial communication using the I2C protocol is only effective when the connected devices are less than a few meters apart.

I2C implementations include features that correspond to the physical layer, data link layer, and network layer of the OSI network model. The physical layer consists of the physical requirements for transmitting data: two open-collector bus lines where the bus pulls the line low during communication and releases it when idle. I2C features that correspond to the data link layer protocol include bus arbitration and clock stretching - mechanisms for error checking and data flow control. I2C's physical and logical addressing features are typically associated with the network layer of the OSI model.

3. Embedded Networking with Ethernet

Embedded devices that connect to local area networks or the internet can be implemented using Ethernet technology. Ethernet connections are often implemented as part of a protocol stack known as the Internet Protocol Suite, sometimes referred to as TCP/IP. The TCP/IP protocol stack includes four layers that are closely analogous to the OSI network model:

  1. Application Layer - The application layer of the TCP/IP protocol stack combines functions that are represented in the application layer, presentation layer, and session layers of the OSI model. Communication protocols for the application layer include HTTP, FTP, DNS, SMTP, TELNET and others.
  2. Transport Layer - The transport layer of the TCP/IP protocol stack implements either the transmission control protocol (TCP) or the user datagram protocol (UDP). The TCP is ideal for applications that require reliable data streaming, whereas UDP can be implemented for embedded systems where reduced latency is valued over data transfer reliability.
  3. Internet - The internet layer of TCP/IP provides network layer functions from the OSI model. In the internet layer, each device is assigned an IP address according to the IPv4 or IPv6 standard. The internet layer transports network packets from a host device to a target device specified by its IP address. It supports processes and functions for sending and receiving packets, as well as detecting and diagnosing errors.
  4. Network Interface - An ethernet networking interface card allows an embedded system to connect to a network physically using a twisted-pair or fiber optic ethernet cable. Ethernet provides networking capabilities that encompass the features of the physical layer and the data link layer of the OSI model.
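
To make the layering concrete, here is a minimal sketch of a TCP client in C using standard POSIX sockets. The server address and port are hypothetical, and an embedded target would typically substitute the equivalent calls from its own TCP/IP stack (for example, lwIP's sockets API).

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main (void) {
        /* Transport layer: create a TCP socket over IPv4. */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Address of a hypothetical server on the local network. */
        struct sockaddr_in server = {0};
        server.sin_family = AF_INET;
        server.sin_port   = htons(8080);              /* destination port */
        inet_pton(AF_INET, "192.168.1.100", &server.sin_addr);

        /* Internet layer: the IP stack routes the connection to the address. */
        if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
            perror("connect");
            close(sock);
            return 1;
        }

        /* Application layer: send a payload and read the reply. */
        const char *msg = "sensor reading: 42\n";
        send(sock, msg, strlen(msg), 0);

        char reply[128];
        ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);
        if (n > 0) {
            reply[n] = '\0';
            printf("server replied: %s", reply);
        }

        close(sock);
        return 0;
    }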

Embedded Networking for the Internet of Things (IoT)

Embedded systems engineers have pioneered new protocols with features that are more favorable for operating devices in the IoT. IoT devices come with a range of requirements for size, cost, data transfer rate, serviceability, power, and onboard computing capacity. 

While the traditional internet protocol suite is suitable for computers and some types of embedded systems, some of its protocols do not fit the requirements of many devices in the IoT. Formats and protocols like XML, HTTP, TCP, and IPv6 add a large data overhead to each transmission, which is inefficient for the small, frequent data transfers that characterize many IoT devices. Newer communication protocols such as MQTT, LoRa, SIGFOX, Weightless, WirelessHART, Zigbee, 6LoWPAN, and DTLS are being used to provide more efficient web-based communications while limiting data overhead for IoT devices.

The greatest challenge for embedded networking with IoT devices is anticipating which standards and technology will remain relevant in the future. 

Summary

Embedded networking presents a unique challenge for engineers, whether the application entails networking microcontrollers with each other in a closed system or implementing a physical ethernet connection to support LAN, WAN, or internet connectivity. To succeed, developers must be familiar with the basic functioning of computer networks and effectively adapt this knowledge to account for the application-specific device requirements and limitations associated with their embedded system. 

With the OSI model as a foundation, embedded engineers can better understand the features required to support network connectivity and adopt a protocol stack for their devices that suits its unique functions and requirements.

Total Phase empowers embedded engineers with the diagnostic tools they need to successfully implement embedded networking with I2C and CAN bus along with other leading protocols. With our high-performance bus monitoring tools, product engineers can debug their embedded networking implementations more quickly, reducing overall time-to-market.

Ready to learn more?

Request a Demo

Microcontroller vs Microprocessor - What are the Differences?


Seasoned embedded systems engineers and product developers in the electronics industry should be familiar with the functional differences between a microcontroller and a microprocessor. Both types of components are essential for designing and building various types of electronic devices, yet it can be difficult to distinguish between them based on their definitions alone:

A microcontroller is a small computer on a single integrated circuit chip. A microcontroller typically contains one or more processor cores, along with additional peripherals (memory, serial interface, timer, programmable I/O peripherals, etc.) on the same chip.

A microprocessor is a computer processor that incorporates the functions of a central processing unit (CPU) onto just a few (and often only one) integrated circuits. 

 On the surface, it seems like microcontrollers and microprocessors have a lot in common. They are both examples of single-chip processors that have helped accelerate the proliferation of computing technology by increasing the reliability and reducing the cost of processing power. They are both single-chip integrated circuits that execute computing logic, and both types of processors are found inside millions of electronic devices around the world.

To help clarify the differences between microcontrollers and microprocessors, we've created this blog post comparing the two most common types of computer processors. We'll look at the key differences between a microcontroller and a microprocessor, from architecture to applications, helping you arrive at a clear understanding of which of these components should power your next computer engineering project.

Microcontroller vs Microprocessor

 

What is the Difference Between a Microcontroller and Microprocessor? 

The type of computer processor that you choose for your embedded system or computer engineering project will have a significant impact on your design choices and project outcomes, so it is crucial that you are fully informed about the main options and their unique features and benefits. Let's take a more detailed look at the difference between a microcontroller and a microprocessor.

Microprocessor and Microcontroller Architecture Explained

Microprocessors and microcontrollers perform relatively similar functions, but if we look specifically at the architecture of each type of chip, we'll see just how different they are.

The defining characteristic of a microcontroller is that it incorporates all of the necessary computing components onto a single chip. The CPU, memory, interrupt controls, timer, serial ports, bus controls, I/O peripheral ports, and any other necessary components are all present on the same chip and no external circuits are required. 

In contrast, a microprocessor consists of a CPU and several supporting chips that supply the memory, serial interface, inputs and outputs, timers, and other necessary components. Many sources indicate that the terms "microprocessor" and "CPU" are essentially synonymous, but you may also come across microprocessor architectural diagrams that depict the CPU as a component of the microprocessor.  You can think of a microprocessor as a single integrated circuit chip that contains a CPU. That chip can connect to other external peripherals such as a control bus or data bus that provide binary data inputs and receive outputs from the microprocessor (also in binary).

The key difference here is that microcontrollers are self-contained. All of the necessary computing peripherals are internal to the chip, where microprocessors deal with external peripherals. As we'll soon see, each of these architectures has its own unique advantages and disadvantages.

Microprocessor and Microcontroller Applications Explained

Microprocessors and microcontrollers are both ways of implementing CPUs in computing. So far we've learned that microcontrollers integrate the CPU onto the chip with several other peripherals, while a microprocessor consists of a CPU with wired connections to other supporting chips. While there may be some overlap, microprocessors and microcontrollers have relatively separate and distinct applications. 

Microprocessors depend on interfacing a number of additional chips to form a microcomputer system. They are often used in personal computers where users require powerful, high-speed processors with versatile capabilities that support a range of computing applications. The use of external peripherals with microprocessors means that components can be upgraded easily - for example, a user might replace their RAM chip to benefit from additional memory. 

Programmable microcontrollers contain all of the components of a microcomputer system on a single chip that runs at low power and performs a dedicated operation. Microcontrollers are most commonly used in embedded systems applications where devices are expected to execute basic functions reliably and without human interference for extended periods of time.

Three Key Differences Between Microcontrollers and Microprocessors

Cost

Generally speaking, microcontrollers tend to cost less than microprocessors. Microprocessors are typically manufactured for use with more expensive devices that will leverage external peripherals to drive performance. They are also significantly more complex, as they are meant to perform a variety of computational tasks while microcontrollers usually perform a dedicated function. This is another reason why microprocessors require a robust external memory source - to support more complex computational tasks.

With a microcontroller, engineers write and compile the code intended for the specific application and upload it into the microcontroller, which internally houses all of the necessary computing features and components to execute the code. Due to their narrow individual applications, microcontrollers frequently require less memory, less computing power, and less overall complexity than microprocessors, hence the lower cost.

Speed

When it comes to overall clock speed, there is a significant difference between industry-leading microprocessor chips and high-quality microcontrollers. This relates back to the idea that microcontrollers are meant to handle a specific task or application, while a microprocessor is meant for more complex, robust, and unpredictable computing tasks. 

One of the key design advantages associated with microcontrollers is that they can be optimized to run the code for a specific task. That means using just the right amount of speed and power to get the job done - no more and no less. As a result, many microprocessors run at clock speeds of up to 4 GHz, while microcontrollers can operate at much lower speeds of 200 MHz or less.

At the same time, the close proximity of on-chip components can help microcontrollers perform functions quickly despite their slower clock speed. Microprocessors can sometimes operate more slowly because of their dependence on communicating with external peripherals.

Power Consumption

One of the key advantages associated with microcontrollers is their low power consumption. A computer processor that performs a dedicated task requires less speed, and therefore less power, than a processor with robust computational capacity. Power consumption plays an important role in implementation design: a processor that consumes a lot of power may need to be plugged in or supported by an external power supply, whereas a processor that consumes limited power could be powered for a long time by just a small battery. 

For tasks that require low computational power, it can be much more cost effective to implement a microcontroller versus a microprocessor that consumes much more power for the same output.

Embedded Systems and Microcontrollers

Microcontrollers include many features that make them suitable for application in embedded systems:

  • They are self-contained, including all of the necessary peripherals on a single integrated circuit chip
  • They are designed to run a single dedicated application
  • They can be optimized (software and hardware) for a single dedicated application
  • They exhibit low power consumption and may include power-saving features, making them ideal for applications that require the processor to function for long periods of time without human interference
  • They are relatively inexpensive when compared to CPUs, mainly because the entire system exists on a single chip

While microprocessors may be more powerful, that additional power comes at a cost that makes microprocessors less desirable for embedded systems applications: larger size, more power consumption, and greater cost.

Summary

Ultimately, microcontrollers and microprocessors are different ways of organizing and optimizing a computing system based on a CPU. While a microcontroller puts the CPU and all peripherals onto the same chip, a microprocessor houses a more powerful CPU on a single chip that connects to external peripherals. Microcontrollers are optimized to perform a dedicated low-power application - ideal for embedded systems - while microprocessors are more useful for general computing applications that require more complex and versatile computing operations.

If you're an embedded systems engineer working on a new project with programmable microcontrollers, Total Phase has the tools that work for you and your embedded systems. From host adapters to protocol analyzers, we can help you save time and energy while debugging your product and reduce your overall time to market.

Any questions? Send them our way! You can reach out to us at sales@totalphase.com.

Basics of Embedded C Programming: Introduction, Structure, Examples


C is a general-purpose programming language with a range of desirable features and rich applications in computing. With its origins in the assembly language, the C language includes constructs that can be efficiently mapped on to typical machine instructions, making the language useful for coding operating systems and many types of application software. 

Embedded C Language Basics

The C language, like so many other brilliant creations in computer electronics, has its origins at Bell Labs. In 1969, American computer scientists Dennis Ritchie and Ken Thompson began building the first iteration of the Unix operating system in assembly language on an 18-bit minicomputer known as the PDP-7. Thompson wished to design a programming language that could be used to write utilities for the new platform. With collaboration from Ritchie, and after several rounds of iteration and improvement, this work produced the C language in 1972, along with a C compiler and utilities that were released with Version 2 Unix.

C continued to develop over the years, with new language features being implemented to address changing application requirements:

  • 1978: The first informal C standard was published under the name K&R C, after Brian Kernighan and Dennis Ritchie, whose book The C Programming Language served as the de facto specification for the language
  • 1989/90: The American National Standards Institute (ANSI) began to formalize a standard for the specification of the C language in 1983, releasing their finalized standard in 1989. In 1990, the ANSI standard was adopted by the International Organization for Standardization (ISO) and published in the document ISO/IEC 9899:1990. 
  • 1999: The C standard underwent further revision in the late 1990s, resulting in the publication of ISO/IEC 9899:1999. The name "C99" is commonly used to refer to this revision of the C standard. C99 introduced new data types and other features like variable-length arrays and flexible array members.
  • 2011: Generic macros, anonymous structures, and multi-threading were among the new features specified in the 2011 revision of the C standard, commonly called C11.
  • 2018: The most recent revision of the C standard provides technical corrections and additional clarification regarding aspects that were left unclear in C11. The current C programming standard and specifications can be viewed in the document ISO/IEC 9899:2018.

Embedded C is a set of language extensions for the C language that makes it more suitable for embedded applications. A technical report detailing these language extensions was released in 2004, with a revised edition of the same report released in 2016. 

As a low-level programming language, Embedded C gives developers more hands-on control over factors like memory management. This is useful for programming embedded microcontrollers that frequently face power usage and memory constraints. Embedded C also introduces the in_port and out_port functions for I/O, along with features like fixed-point arithmetic, hardware addressing, and multiple memory areas.

Preparing to write code for your next embedded systems project? It's time to open your eyes to what Embedded C has to offer. (Image courtesy of Unsplash)

Introduction to Embedded C Programming

Standard C data types take on new characteristics in Embedded C to deal with the requirements of restricted, resource-limited embedded computing environments. Below, we review the characteristics of the data types available for engineers coding microcontroller systems in Embedded C.

Data Types in Embedded C

1. Function Data Types

Embedded C programs can deal with both functions and parameters. The function data type determines the type of value that can be returned by a given subroutine; without a specified return type, a function returns a value of the integer type by default. Embedded C also supports parameter data types that indicate the values that should be passed into a specified function. When a function takes no parameters, or when no return value is expected, the keyword void is used in the declaration.
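
A brief sketch of how these declarations look in practice (the function and parameter names here are illustrative, not from any particular toolchain):

    int  read_sensor(void);      /* returns an int; takes no parameters        */
    void set_led(int state);     /* returns nothing; takes one int parameter   */
    void heartbeat(void);        /* neither parameters nor a return value      */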

2. Integer Data Types

Embedded C supports three different data types for integers: int, short, and long. On 8-bit architectures, the default size of int values is typically 16 bits, but Embedded C allows int sizes to be switched between 8 and 16 bits to reduce memory consumption. The short int data type lets embedded engineers specify an integer value that is just one or two bytes in size. The long int data type allocates twice the memory of the int data type - typically 32 bits where int is 16 bits.
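
To illustrate (exact widths are compiler- and target-dependent, so treat the comments as typical assumptions for an 8-bit platform rather than guarantees):

    short int duty_cycle = 0;    /* smallest practical integer, often 1 or 2 bytes */
    int       sample     = 0;    /* default integer, often 16 bits on 8-bit MCUs   */
    long int  odometer   = 0L;   /* twice the size of int, often 32 bits           */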

3. Bit Data Types

Embedded C uses two types of variables for bit-sized quantities: bit and bits. A bit value corresponds to a single independent bit while the bits variable data is used to collectively manage and address a structure of 8 bits. Programmers can assign a byte value to a bits variable to allow for the addressing of individual bits within the data.
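
On compilers that support these extensions, usage might look like the sketch below; the dotted bit-access syntax follows the convention used later in this article's example and is compiler-specific, not portable C:

    bit  ready = 0;      /* a single, independently addressable flag           */
    bits status;         /* a structure of 8 individually addressable bits     */

    status = 0x41;       /* assign a whole byte to the bits variable...        */
    ready  = status.0;   /* ...then address an individual bit within the data  */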

4. Real Numbers

One of the most salient differences between desktop computer applications and those used to power embedded systems is their treatment of real numbers and floating-point data types. These data types include any quantity that can represent a distance along a line, including negative numbers and numbers with digits on both sides of a decimal point. Real numbers place a significant burden on the computing and memory storage capacities of embedded systems, and thus they are among the least frequently used data types in embedded applications. 

In addition to the conventional data types, there are also complex data types that can be useful for programming microcontrollers for embedded systems applications. These include types like pointers, arrays, enumerated types (finite sets of named values), unions, structures (ways of structuring other data), and more. 
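
A combined sketch of a few of these complex types (all names are illustrative):

    enum mode { IDLE, RUN, FAULT };      /* a finite set of named values    */

    struct reading {                     /* a way of structuring other data */
        unsigned int  timestamp;
        unsigned char channel;
        int           value;
    };

    struct reading log[16];              /* an array of structures          */
    struct reading *latest = &log[0];    /* a pointer into that array       */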

Structure of Embedded C Program

With these data types in mind, let's take a look at the structure of a program in Embedded C. 

Documentation/Commentary

Effective coding requires documentation or commentary to capture any important details of what the code is doing. An Embedded C program typically begins with documentation information such as the name of the file, the author, the date the code was created, and any specific details about the functioning of the code. Embedded C supports single-line comments that begin with the characters "//" and multi-line comments that begin with "/*" and end with "*/".

Preprocessor Directives 

Preprocessor directives are not normal code statements. They are lines of code that begin with the character "#" and appear in an Embedded C program before the main function. Before the rest of the code is compiled, the preprocessor resolves every directive: some directives skip part of the code based on certain conditions, others replace the directive with the contents of a separate file, and others let the program adapt to the hardware resources available. Many types of preprocessor directives are available in the Embedded C language.
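
For example (the macro names here are illustrative):

    #include <hc705c8.h>     /* replaced with the contents of a separate file  */
    #define  LED_ON 1        /* simple textual substitution before compilation */

    #ifdef DEBUG             /* conditionally include or skip part of the code */
        #define WAIT_TIME 1
    #else
        #define WAIT_TIME 10
    #endif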

Global Variable Declaration

Global declarations happen before the main function of the source code. Engineers can declare global variables that may be called by the main program or any additional functions or sub-programs within the code. Engineers may also define functions here that will be accessible anywhere in the code.
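
A brief sketch of the global declaration section (names are illustrative):

    unsigned int error_count = 0;   /* a global variable, usable by main() and every subprogram */
    void blink_led(int times);      /* a function prototype, making blink_led callable anywhere */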

Main Program

The main part of the program begins with main(). If the main function is expected to return an integer value, we would write int main(). If no return is expected, convention dictates that we should write void main(void).

  • Declaration of local variables - Unlike global variables, these ones can only be called by the function in which they are declared.
  • Initializing variables/devices - A portion of code that includes instructions for initializing variables, I/O ports, devices, function registers, and anything else needed for the program to execute
  • Program body - Includes the functions, structures, and operations needed to do something useful with our embedded system

Subprograms

An Embedded C program file is not limited to a single function. Beyond the main() function, programmers can define additional functions (subprograms) that can be called from main() or from one another once the code is compiled. 
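
As a tiny sketch - assuming the PORTA declaration and wait() helper used in the full example below - a subprogram might be defined after main() and called from it:

    void blink_led(int times);      /* prototype, declared before use */

    void main(void){
        blink_led(3);               /* call the subprogram from the main program */
        while (1);
    }

    void blink_led(int times){      /* the subprogram, defined beyond main() */
        while (times-- > 0){
            PORTA.0 = 1;            /* assumes the PORTA declaration from the example below */
            wait(1);
            PORTA.0 = 0;
            wait(1);
        }
    }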

Example of Embedded C Programming 

Now that we have reviewed the basic structure of programs in Embedded C, we can look at a basic example of an Embedded C program. In keeping with the structure outlined above, commentary on this code will be offered as part of the code structure itself, making use of the documentation conventions previously outlined for the Embedded C language.

/* This Embedded C program uses preprocessor directives, global variables declaration and a simple main function in a simple introductory application that turns a light on when specified conditions are satisfied */

#include <hc705c8.h>
#include <port.h>

/* The code imports header files that contain hardware specifications for the microcontroller system, including port addressing. The following ports are declared within the imported header file:  */

#pragma portrw PORTA @ 0x0A;
#pragma portw DDRA @ 0x8A;

/*In the first directive, the name PORTA refers to the data register belonging to Port A, while the suffix "rw" added to "port" indicates both read and write access. In the second directive, the suffix "w" indicates write access only to the DDRA, which is Port A's data direction register. */

#define ON 1
#define OFF 0
#define PUSHED 1

/* ON, OFF, and PUSHED are global symbolic constants for this program. They can be used by the main function along with any subsequent functions included by the programmer. */

void wait(registera);  

/* A wait function is needed for this program. We didn't code one but you can imagine how it would work. A wait function allows programmers to introduce a time delay between programmed events. */
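
/* One possible sketch of such a wait function: a crude busy-wait loop. LOOPS_PER_SECOND is a hypothetical constant that would need calibrating to the CPU clock and compiler, and the int parameter is used here for portability in place of the compiler-specific prototype above. */

#define LOOPS_PER_SECOND 50000L

void wait(int seconds){
    volatile long i;   /* volatile keeps the compiler from optimizing the loop away */
    while (seconds-- > 0){
        for (i = 0; i < LOOPS_PER_SECOND; i++){
            ;          /* burn cycles */
        }
    }
}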

void main(void){
// Beginning the main function with the expectation of no return

    DDRA = 0b11111101;

/* We have already allowed this program to have "write" access to the DDRA port. Here, we use the assignment to set the DDRA pins to the desired configuration. In this configuration, pin 0 is set to output (1) and pin 1 is set to input (0). The rest of the pins do not matter for this application. */

    while (1){
        if (PORTA.1 == PUSHED){
            wait(1);
            if (PORTA.1 == PUSHED){
                PORTA.0 = ON;
                wait(10);
                PORTA.0 = OFF;
            }
        }
    }
}

/* This basic function checks the value of PORTA.1 to determine whether it is pushed (has a value of 1). If so, the program waits for 1 second and does the check again. If the second check succeeds, the program turns on an LED light by activating PORTA.0, keeps it on for 10 seconds, then shuts the light off again. */

Summary

Total Phase supports the efforts of embedded systems engineers building systems with the Embedded C programming language. In addition to our knowledge base and video resources, Total Phase provides industry-leading testing and development tools for use in embedded systems applications. Whether your device uses I2C, SPI, CAN, or USB, Total Phase offers the debugging and analysis tools you need to streamline product testing and reduce your time-to-market.

Request a Demo 

How Do I Identify the Cause and Resolve PHY Errors on a USB 3.0 Camera Device?


Question from the Customer:

I am using the Beagle USB 5000 v2 SuperSpeed Protocol Analyzer - Standard Edition with the Data Center Software to resolve a USB 3.0 issue with a Basler industrial camera device. The Beagle USB 5000 v2 analyzer reports PHY Errors for all USB 3.0 packets. I have tried other slave devices (cameras, mobile phones), different cables, and different hosts (ARM and Intel), but the results remain the same - PHY Errors and no enumeration on the host.

A summary of the attempts I have made so far:

  • Changed the SS Frontend Settings: set the drive strength to 990 mV and tried equalization and pre-emphasis in different combinations
  • Changed devices, cables, and hosts

The setup I am using:

  • The Data Center Software is running on Mac OS.
  • The host systems run different versions of Linux.

I am not sure how to proceed. If you have any suggestions, I would really appreciate it.

Response from Technical Support:

Thanks for your question! There are various scenarios that can cause PHY errors. In one scenario, PHY errors are “normal” while the link is being established during “training”. If the link has been established, there are specific bus events that can cause such errors to occur.

Why PHY Errors Occur During Training

The Beagle USB 5000 v2 SuperSpeed analyzer has active front-end circuitry between the USB 3.0 host and the USB 3.0 device. This front-end re-transmits the signals between the host and device as the signals pass through the analyzer. When the link comes out of an idle state, it is not uncommon for some number of PHY errors to occur while the link is being established. Once the link is established, PHY errors should no longer occur.

How USB 3.0 Training Works

The USB 3.0 training includes the TSEQ, TS1, and TS2 sequences.

During the training period, each link sends training sequences. TSEQ, TS1, and TS2 sequences are sent both upstream and downstream. Here is a summary of the training sequence:

  1. The link first sends multiple TSEQ sequences, then sends TS1. After the link receives a specific number of clean TS1 sequences, it starts to send TS2.
  2. The training period begins when one link sends TSEQ and concludes when both links have sent their last TS2.

When PHY Errors Occur During Training

It is normal for PHY errors to occur during the training period because the transmitter clock (in the transmitting link) and the receiver clock (in the receiving link) differ. During the training period the receiver clock is compared against the transmitter clock; the receiver clock is not yet trained to the transmitter clock.

  • If the receiver clock is faster than the transmitter clock, the receiver link inserts extra symbols.
  • If the receiver clock is slower than the transmitter clock, the receiver link misses symbols.

After the training period is finished (after the last TS2 is transmitted), PHY errors are no longer expected.

When the Beagle USB 5000 v2 analyzer detects a small number of PHY errors during the training period, those errors are not marked in red. However, if significantly more PHY errors are detected during the training period than expected, the PHY errors are marked in red.

The link is normally trained in 1-2 us.

  • The time it takes the upstream link to be trained is measured from when the first TS1 appears on both links until the first TS2 appears on the upstream link.
  • The time it takes the downstream link to be trained is measured from when the first TS1 appears on both links until the first TS2 appears on the downstream link.

Communication During Training

The Low Frequency Periodic Signaling (LFPS) is used by USB ports to communicate across a link that is under training, in warm reset, or in a low power state. The LFPS type is determined by the burst and repeat times of a signal, as well as the LTSSM state.

Specific Events that Can Cause PHY Errors

Sometimes, a PHY error is a special-case bus event that matches one of the following errors:

  • Disparity Error
  • Elastic Buffer under-run or over-run
  • 8b/10b Decode Error

While the PHY Error collapses these errors into a single match, you can distinguish some of the errors in the captured data. Here is what you can look for (a sketch for spotting these markers in exported capture data follows the list):

  • When an elastic buffer under-run error occurs, an EDB symbol (K28.3) is inserted into the data stream to fill the under-run.
  • When an 8b/10b Decode Error occurs, a SUB symbol (K28.4) is substituted in place of the bad 10b symbol in the data stream.
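
As an illustration only: if you export the decoded byte stream as (value, control-symbol flag) pairs, a scan for these two markers could look like the sketch below. The export format, struct, and function names are our assumptions; the byte values 0x7C and 0x9C are the standard 8b/10b codes for K28.3 and K28.4.

    #include <stdio.h>

    #define EDB_SYMBOL 0x7C   /* K28.3: inserted to fill an elastic buffer under-run */
    #define SUB_SYMBOL 0x9C   /* K28.4: substituted in place of a bad 10b symbol     */

    struct symbol {
        unsigned char value;
        unsigned char is_control;   /* nonzero for K (control) codes */
    };

    void scan_for_phy_markers(const struct symbol *stream, long count)
    {
        long i;
        for (i = 0; i < count; i++) {
            if (!stream[i].is_control)
                continue;
            if (stream[i].value == EDB_SYMBOL)
                printf("offset %ld: EDB (K28.3) - elastic buffer under-run\n", i);
            else if (stream[i].value == SUB_SYMBOL)
                printf("offset %ld: SUB (K28.4) - 8b/10b decode error\n", i);
        }
    }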

Other Causes of PHY Errors

PHY errors also occur due to electrical inconsistencies. For your setup, we suggest inserting a self-powered SuperSpeed hub between the host and the Beagle USB 5000 v2 analyzer, or between the analyzer and your target device. If electrical inconsistencies are the cause, this could resolve the problem.

Another possibility is to use hardware filters in Complex Matching to restrict the amount of data stored in the capture buffer.

For more information about this setup, please refer to the section USB Device Settings in the Data Center Software User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

If you want more information, feel free to contact us with your questions, or request a demo that applies to your application.

An Introduction to Real-Time Embedded Systems


One of the earliest decision points in embedded systems design is whether the system will require real-time computing capabilities. Real-time computing describes the ability to react to inputs and deliver the prescribed output within a constrained time frame. Devices that use real-time computing are often deployed in applications where their correct functioning can make the difference between life and death.

As an example, consider the airbag in a conventional family sedan. When the vehicle stops abruptly in a collision, the airbag must deploy in a split second to be effective for passengers. This means that the embedded microcontroller controlling the airbags must detect that a collision is happening and electronically trigger the release of the vehicle airbags - all in just a fraction of a second. This capability is made possible by the technology of real-time computing.

In this introduction to real-time embedded systems, we'll give a high-level overview of what these unique embedded systems are, how they're designed and classified, and why their functionality is so critical in real world applications. We'll also offer some real-time embedded systems examples.

What is a Real-Time Embedded System?

A real-time embedded system combines the technologies of embedded systems and real-time computing. To achieve the most complete and accurate description, we begin with a deeper look at the defining features of these technologies.

Embedded Systems

Embedded systems are hardware-and-software computer systems that perform a dedicated function within a larger system or device. An embedded system typically consists of a microcontroller, also called a computer-on-a-chip. Microcontrollers are equipped with a CPU, memory (RAM and ROM), I/O ports, a communication bus, timers/counters, and analog converters (ADCs and DACs). 

Embedded systems have three defining characteristics that embedded systems engineers should be aware of:

  1. Embedded systems are application-specific. While a general-purpose computer could run any compatible application of the user's choosing, an embedded device is programmed and optimized to run one specific application that satisfies its real world function.
  2. Embedded systems do not always have a user interface. A general-purpose computer incorporates a user interface where a user can input instructions or otherwise interact with the system.  An embedded system is often hidden inside a device such that the user does not interact directly with the embedded system itself. Embedded systems typically receive input from sensors or a connected data source instead of directly from the user.
  3. Embedded systems are hardware and software. An embedded device consists of a software application that delivers a specific function or service, along with the necessary hardware to run the application in the live environment. The core challenge of embedded systems design is to create a product that solves the problem while meeting strategic and business requirements for product size, power consumption, and unit cost.

Real-Time Computing

Real-time computing describes the capability of a computing system to respond to a given input within a tightly constrained time frame. In the context of embedded systems, engineers implement real-time computing by installing a special type of operating system onto the embedded device. Operating systems can be conceptualized as the bridge between embedded hardware and software.  There are two basic types for embedded engineers to choose from:

  1. General Purpose Operating System (GPOS) - A GPOS is the software layer that sits between the hardware and the application in an embedded system. It consists of the kernel, memory management, networking, and other services provided to the application. A GPOS is used in cases where tasks are not time-sensitive and computing power is valued more highly than rapid response times.
  2. Real-Time Operating System (RTOS) - An RTOS is used for embedded systems applications that are time-sensitive or time-critical. A time-critical task is one that must be performed within specified time constraints to avoid negatively impacting users. In a time-critical system, the value of completing a task is linked to its timeliness, and tasks completed past the deadline may have negative value. An RTOS includes a task scheduler whose goal is to ensure that critical tasks meet their deadlines, even when that means sacrificing other areas of performance.

Real-time embedded systems are those that incorporate a real-time operating system, ensuring that the device can respond to sensory inputs within the time constraints specified by the embedded software. Real-time embedded systems are further classified based on the type of real-time response they provide.
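
To make the RTOS side concrete, here is a minimal sketch of a time-critical periodic task using the FreeRTOS API; the task name, period, and body are illustrative assumptions, not a reference design:

    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical monitoring task that must run every millisecond. */
    void vCollisionMonitorTask(void *pvParameters)
    {
        TickType_t xLastWakeTime = xTaskGetTickCount();
        for (;;) {
            /* vTaskDelayUntil keeps the period fixed regardless of how long the
               body takes, which lets the scheduler honor the task's deadline. */
            vTaskDelayUntil(&xLastWakeTime, pdMS_TO_TICKS(1));
            /* read the accelerometer and decide whether to trigger deployment (illustrative) */
        }
    }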

Air bag deployment depends on the rapid response time of a real-time embedded system with a hard RTOS.

3 Classifications for Real-Time Embedded Systems

Real-time embedded systems combine the functionality of a real-time operating system with a microcontroller (hardware) and unique application (software) to solve a business problem. There are three types of RTOS that differ in function based on the time constraints associated with their application.

Hard RTOS - A hard RTOS is implemented when it is crucial that no deadlines are missed and all tasks are completed within the prescribed time frame. In a hard RTOS, delays in the system are strictly time-bound to ensure that deadlines are met at a 100% rate and any missed deadline is considered a system failure.

Firm RTOS - In a firm RTOS, errors are occasionally permissible but there is an understanding that missed deadlines result in degraded performance of the device. A device using a firm RTOS may occasionally miss a deadline, but the application can recover as long as failures are relatively infrequent. 

Soft RTOS - In a soft RTOS, user experience is optimized when tasks are completed on-time but performance is not totally degraded when deadlines are missed. Consider a video game console that runs a game engine: it must schedule tasks and complete them on time for the game to run smoothly, but a little bit of lag or an occasional hiccup in performance does not necessarily ruin the experience for the player. 

Real-Time Embedded Systems Design Patterns

A design pattern describes a repeatable solution to a problem that commonly occurs when designing a specific type of device. The pattern acts as a description of how an engineer can solve a specific problem, a framework taken from a solution to a similar problem. Design patterns help embedded systems engineers avoid reinventing the wheel as they develop their products, limiting their total debug time and reducing overall time-to-market. 

The following design patterns are useful for engineers building real-time embedded systems:

Object Design Patterns

  • Manager Design Pattern - The manager object can be implemented to keep track of multiple entities in embedded systems applications where the system must support several entities of the same or similar type.
  • Resource Manager Pattern - This design pattern can be used to implement a centralized resource manager for multiple resources of the same type.
  • Half Call Design Pattern - This design pattern is used for implementations that require interactions between more than one communication protocol. 

Protocol Design Patterns

  • Protocol Stack Design Pattern - This design pattern can be used to implement a layered protocol and provides for dynamic insertion and removal of protocol layers within the stack.
  • Protocol Layer Design Pattern - This design pattern is used to decouple protocol layers and reduce dependencies between layers of the protocol stack.
  • Protocol Packet Design Pattern - This design pattern offers a simplified buffering architecture for real-time embedded systems, implementing a single buffer that supports addition and extraction of the various protocol layers.
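
As a sketch of the Protocol Packet idea (field names and sizes are our assumptions): a single buffer reserves headroom so each layer can prepend its header in place on the way down the stack and strip it on the way up, avoiding a copy per layer.

    #define PKT_HEADROOM 64
    #define PKT_BUF_SIZE 1600

    struct packet {
        unsigned char  buf[PKT_BUF_SIZE];
        unsigned char *data;   /* current start of valid data  */
        unsigned int   len;    /* current length of valid data */
    };

    void pkt_init(struct packet *p)
    {
        p->data = p->buf + PKT_HEADROOM;   /* reserve room for lower-layer headers */
        p->len  = 0;
    }

    unsigned char *pkt_push_header(struct packet *p, unsigned int hdr_len)
    {
        p->data -= hdr_len;                /* grow the packet toward the front   */
        p->len  += hdr_len;
        return p->data;                    /* the layer fills in its header here */
    }

    unsigned char *pkt_pop_header(struct packet *p, unsigned int hdr_len)
    {
        unsigned char *hdr = p->data;      /* the header being stripped */
        p->data += hdr_len;
        p->len  -= hdr_len;
        return hdr;
    }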

Architecture Design Patterns

  • Processor Architecture Patterns - There are many possible architectures for real-time embedded systems that have been documented as design patterns. Each architecture design pattern specifies its own processes and modules along with the corresponding roles and responsibilities. Some available options include:
    • Operations & Maintenance Processor Architecture
    • Central Manager Architecture
    • Module Manager Architecture
    • Device Controller Architecture
  • Feature Coordination Pattern - In real-time embedded systems design, each task should include a feature coordinator. Feature coordination ensures that a feature does not fail to complete as a result of packet loss or task failure. Feature coordination also helps embedded systems recover after a request time-out.
  • Timer Management Design Patterns - Timer management is a key feature of real-time embedded systems. Timer management design patterns are used frequently to address the requirements of real-time embedded devices. They include patterns for failure detection, message loss and fault recovery sequences, inactivity detection, sequencing operations, and other features that ensure tasks are completed by the specified deadline.

Real-Time Embedded Systems Examples

Let's review a few short examples of real-time embedded systems - one from each of the classifications outlined above.

We've already mentioned vehicle airbags, which you may have identified as an example of a hard real-time embedded system. Remember the criteria for hard real-time: the value of the task is zero or negative when the deadline is missed. In this case, a missed deadline means that the airbag was not deployed in time to protect passengers in the collision.

A cardiac pacemaker could be an example of a soft real-time embedded system. Pacemakers control the heartbeat by sending electronic pulses to the heart when the attached electrical nodes detect cardiac arrhythmia in the patient. The electrical pulses help the patient's heart return to a normal beating pattern. While pacemakers provide a necessary and life-saving function, they can function effectively even while missing the occasional task deadline.

A manufacturing assembly line with robotic components might require a firm real-time embedded system. Imagine a machine that performs a simple task like sealing a toothpaste tube. If a task misses a deadline, the process fails and a single tube of toothpaste might be ruined - but as long as this happens infrequently and does not cause a major disruption, it is not considered a system failure.

Summary

Real-time embedded systems offer the predictability and performance that help embedded systems engineers build products that tackle real-world problems. 

At Total Phase, we build the products that embedded systems developers need to change the world - development kits to help get you started, host adapters to support the product development process, and protocol analyzers to reduce debug time and help get your product to market even faster. 

We're also supporting embedded systems development through our knowledge base where we offer tips and advice on real-time embedded programming and a host of other topics.

Learn More About Our Products
