Channel: Total Phase Blog

How to Identify the I2C Slave Addressing Standard for Developers


It is common for embedded device developers and engineers to have questions about what slave address is needed to communicate with a given I2C (Inter-Integrated Circuit) device. There is no one-size-fits-all answer to this question. Some I2C slave devices use 7-bit addressing, some use 8-bit addressing, some use 10-bit addressing, and there are also reserved addresses.

In this piece, we will explain what I2C slave addressing is, why it is important for developers, how to identify different I2C slave addressing formats, and provide some information on where you can learn more or find tools related to I2C development and debugging. We’ll also provide some pro-tips for working with Total Phase devices and I2C addresses throughout this article.

What is I2C Slave Addressing?

I2C slave addressing is the data format by which I2C slave devices are uniquely identified on the bus, and it is essential to enabling communication between I2C master device(s) and slave device(s). A transaction begins with the I2C master sending a START condition followed by the I2C address of the target slave; only then can the master read from or write to that slave.

What does I2C Slave Address mean for developers?

At this point, you may be asking “what kind of impact does I2C slave addressing have for developers?”, and that is a very valid question. The answer is simple: in order to properly interface with an I2C slave device, you need to know its I2C address scheme. If you do not know the correct I2C address, you’re going to have difficulty getting an I2C master and I2C slave device to communicate, and your I2C development and debugging efforts can hit a roadblock fast.

Why the confusion about I2C Slave Addressing?

The root cause of much of the confusion surrounding I2C slave addressing is the use of 7-bit, 8-bit, and 10-bit addresses. The I2C specification published by NXP Semiconductors only calls for two I2C address types for Standard Mode I2C: 7-bit and 10-bit. 7-bit was used first and 10-bit was added as an extension. 8-bit I2C address schemes also exist because some vendors include the read/write bit. Additionally, there are a select number of reserved addresses used for specific purposes.

All this comes together to create a market where knowing the right address byte for an I2C slave device isn’t always easy or intuitive. In the section below, we’ll help you identify and understand each of the different I2C address types. For a deeper dive on the I2C specification, check out the I2C-bus Specification and User Manual from NXP.

What are the different types of I2C Slave addressing?

As mentioned above, there are 7-bit, 8-bit, and 10-bit I2C address types as well as a set of I2C address blocks that are reserved. Here are examples and tips for identifying and working with each type.

7-bit I2C Slave Address

A 7-bit I2C address includes a 7-bit slave address in the first 7 bits of a byte. The eighth bit (the bit in the Least Significant Bit position) is the read/write flag. A 0 in the eighth bit indicates a write and a 1 in the eighth bit signifies a read. See the image below for an example:

Pro-tip: Consistent with the I2C address specification, all Total Phase I2C products follow the standard 7-bit I2C address format for I2C slave devices. When using the Aardvark I2C/SPI Host Adapter, the associated software can automatically correct the read/write bit based on the type of transaction that is being carried out. With the Beagle I2C/SPI Protocol Analyzer, the slave address and read/write transaction type are listed in two discrete columns.
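To make the layout concrete, here is a minimal Python sketch (not tied to any Total Phase API; the helper name is ours) that composes the on-wire address byte from a 7-bit slave address and a read/write flag:

```python
# Compose the on-wire address byte from a 7-bit I2C slave address.
# The 7-bit address occupies bits 7..1; bit 0 is the read/write flag.

WRITE = 0
READ = 1

def address_byte(addr_7bit, rw):
    """Return the byte a master sends after START: (address << 1) | R/W."""
    if not 0 <= addr_7bit <= 0x7F:
        raise ValueError("7-bit address out of range")
    return (addr_7bit << 1) | rw

# A device at 7-bit address 0x50 (a typical EEPROM) appears on the wire
# as 0xA0 for writes and 0xA1 for reads.
print(hex(address_byte(0x50, WRITE)))  # 0xa0
print(hex(address_byte(0x50, READ)))   # 0xa1
```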

8-bit I2C Slave Address

As mentioned, 8-bit addresses are not in the official I2C specification. Instead of treating the 7-bit slave address and the read/write bit separately, the 8-bit scheme folds the read/write flag into the address byte itself, so a vendor effectively quotes two 8-bit addresses for the same device: an even value for writes and an odd value for reads. You can see an example of this in the image below:

Pro-tip: When using Total Phase products, only use the first 7 bits as the slave address.

A simple means of checking whether a specific I2C slave device is using an 8-bit address is to check the range of the address. A valid 7-bit address always falls between 0x08 (8) and 0x77 (119), because the addresses at either end of the 7-bit range are reserved. Generally, if your address is outside of this range, the vendor of the I2C slave device has likely assigned an 8-bit I2C address.
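Here is a small Python illustration of the conversion, assuming the common vendor convention of quoting the full 8-bit write address (function names are ours, purely for illustration):

```python
# Convert a vendor-supplied 8-bit I2C address to the 7-bit form
# expected by tools that follow the standard addressing convention.

def to_7bit(addr_8bit):
    """Drop the embedded R/W bit: the 7-bit address is the upper 7 bits."""
    return (addr_8bit >> 1) & 0x7F

def looks_like_8bit(addr):
    """Heuristic from the article: valid 7-bit addresses fall in 0x08-0x77."""
    return addr > 0x77

# A datasheet that lists 0xA0 (write) / 0xA1 (read) is describing the
# single 7-bit address 0x50.
print(hex(to_7bit(0xA0)))      # 0x50
print(looks_like_8bit(0xA0))   # True
```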

10-bit I2C Slave Address

10-bit I2C addresses are compliant with the Standard Mode I2C specification and are compatible with 7-bit I2C address types, which means I2C slave devices with 7-bit and 10-bit addresses can be mixed on the same communication bus. A 10-bit I2C address is identified by a special reserved pattern (more on reserved I2C addresses in the next section): the first address byte always starts with the 1111 0XX indicator, where XX holds the two most significant bits of the 10-bit address, making these addresses easy to identify. The image below provides an example of this I2C address formatting:

It is important to note that the eighth bit of the first address byte remains the read/write flag; the remaining eight address bits follow in the second byte.

It is this formatting that enables 10-bit addresses to retain compatibility with 7-bit I2C addresses. The compatibility between 7-bit and 10-bit addresses was one of the driving factors in our decision to use 7-bit addressing for all of our I2C products. Using 7-bit addressing helps ensure 10-bit addresses are handled correctly as well.
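As a sketch of the formatting described above, the following Python snippet builds the two bytes that begin a 10-bit addressed transfer (the helper name and example address are ours, purely for illustration):

```python
# Build the two on-wire bytes that begin a 10-bit addressed transfer.
# The first byte is the reserved pattern 11110 followed by the two
# most-significant address bits and the R/W flag; the second byte
# carries the remaining eight address bits.

def ten_bit_address_bytes(addr_10bit, rw=0):
    if not 0 <= addr_10bit <= 0x3FF:
        raise ValueError("10-bit address out of range")
    first = 0xF0 | ((addr_10bit >> 8) << 1) | rw   # 1111 0XX + R/W
    second = addr_10bit & 0xFF
    return first, second

hi, lo = ten_bit_address_bytes(0x2A5, rw=0)
print(hex(hi), hex(lo))  # 0xf4 0xa5
```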

Pro-tip: If an Aardvark I2C/SPI Host Adapter is used and a 10-bit address is specified, the software can automatically confirm the correct bits are sent without any special configuration from the user. The Beagle I2C/SPI Protocol Analyzer is also similarly capable of automatically detecting and displaying information from 10-bit slave address I2C devices correctly.

Reserved addresses

As mentioned above, the I2C spec calls out a set of reserved addresses that are used for specific purposes. The table below, from the I2C-bus Specification and User Manual from NXP, explains each of these reserved addresses, which must not be used for any other purpose:
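As a quick reference, the reserved blocks from the NXP specification can also be expressed as a simple Python lookup (the descriptions are paraphrased from the spec; the function name is ours):

```python
# Classify reserved 7-bit I2C addresses per the NXP I2C-bus specification.
# The two reserved blocks are 0000 XXX (0x00-0x07) and 1111 XXX (0x78-0x7F).

RESERVED = {
    0x00: "general call / START byte",
    0x01: "CBUS address",
    0x02: "reserved for different bus format",
    0x03: "reserved for future purposes",
    0x04: "Hs-mode master code", 0x05: "Hs-mode master code",
    0x06: "Hs-mode master code", 0x07: "Hs-mode master code",
    0x78: "10-bit addressing", 0x79: "10-bit addressing",
    0x7A: "10-bit addressing", 0x7B: "10-bit addressing",
    0x7C: "device ID", 0x7D: "device ID",
    0x7E: "device ID", 0x7F: "device ID",
}

def classify(addr_7bit):
    return RESERVED.get(addr_7bit, "general-purpose slave address")

print(classify(0x00))  # general call / START byte
print(classify(0x50))  # general-purpose slave address
print(classify(0x78))  # 10-bit addressing
```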

Summary

While the different types of I2C address formats can be confusing at first, understanding and identifying them is not difficult once you know the layouts. 7-bit addresses use the first 7 bits of the byte as the I2C slave address and the eighth bit as the read/write flag. 10-bit addresses use the first 7 bits for a special reserved indicator that also carries the two most significant address bits, use the eighth bit for the read/write flag, and send the rest of the address in the next byte. 7-bit and 10-bit devices can be used on the same communications bus. 8-bit addresses fold the read/write flag into the address byte itself, so the same device is quoted with one address for writes and another for reads. When using 8-bit-addressed I2C slave devices with Total Phase products, it is important to remember to use only the first 7 bits of the I2C address.

Looking for products to help with I2C development and debugging? Contact the experts!

Now that you are familiar with I2C slave addressing, you may be ready to dive into I2C development or debugging. Here at Total Phase, we offer a variety of products to help you be efficient and effective in your efforts. For example, the Beagle I2C/SPI Protocol Analyzer is useful for engineers in lab environments or in the field and supports the SPI (Serial Peripheral Interface) and MDIO (Management Data Input/Output) protocols in addition to I2C. Alternatively, the Aardvark I2C/SPI Host Adapter is a fast and powerful adapter that connects via USB and is compatible with Windows, Mac OS X, and Linux. To help you compare and select the right product for your projects, check out our I2C/SPI Product Guide, or if you need expert assistance selecting the right tools, contact us today!


How Can I Increase Speed to Get the Maximum I2C Bitrate?


Question from the Customer:

I am using the Promira Serial Platform with the I2C Active - Level 1 Application. Using the Control Center Serial Software, I set the bitrate to 1 MHz in the I2C Control menu, but when I measured the I2C frequency with an oscilloscope, I saw that the actual frequency is around 800 kHz, not 1 MHz. Are there other settings I can use to increase the bitrate?

Response from Technical Support:

Thanks for your question! There are two ways you can accelerate the bitrate. You can use the Promira Software API and connect the Promira platform via Ethernet. Depending on the hardware version of the Promira platform, using pull-up resistors may also help increase the I2C frequency.

Using Software API Reduces GUI Latencies

GUI applications, including Control Center Serial Software, have both operating system (OS) and graphical user interface (GUI) latencies, which affect the bitrate. To bypass the GUI latencies, we recommend using the Promira Software API I2C/SPI Active. This API is compatible with multiple operating systems (Windows, Linux, and Mac) and supports multiple languages (C, Python, Visual Basic, and C#). Software examples are provided that can be used as is or modified for your specifications. For more information, please refer to the API Documentation section of the Promira Serial Platform I2C/SPI Active User Manual.

Ethernet Connectivity Improves Speed

Latencies occur over USB when delivering I2C or SPI data because of round-trip delays. When the Promira platform is connected via Ethernet instead, these delays are reduced and the effective speed increases.

When using Ethernet connectivity, you can provide power to the Promira platform via a USB 2.0 / 3.0 A-micro B cable, or an external power adapter. A 5V, 1.2A adapter is provided in the Promira Ethernet Kit. Following are instructions on how to connect the Promira platform via Ethernet.

How to Connect Promira via Ethernet

The Control Center Serial Software is used for setting up the Ethernet connection.

  1. Connect the Promira platform to the computer via the Ethernet cable and USB A-to-Micro-B cable as illustrated below.
    1. For instructions, please refer to the Software section of the Promira Serial Platform System User Manual.
  2. The Promira platform on the Ethernet-over-USB IP (10.x.x.x) is detected in the Configure Adapter dialog window as shown below.
  3. Set the Promira IP.
    1. In the Configure Adapter dialog, connect to the Promira platform using the available Ethernet-over-USB IP address.
  4. After the connection is established, the port is displayed on the status bar (at the bottom of the screen) as shown below.
  5. Select Adapter -> Network Preferences. By default the IP Address is 192.168.11.1 and the Subnet Mask is 255.255.255.0.
  6. In the Network Preferences dialog window, configure the Promira platform network preferences.
    1. The IP Address can be any value for a peer-to-peer setting.
    2. Use an IP address that does not conflict with other devices on the network.
  7. Click the Apply button.
  8. After the dialog window shows the configured IP, click the OK button.
  9. Set the IP address of the PC LAN network adapter.
    1. Choose the Ethernet adapter to which the Promira platform is connected on the system.
    2. Disable the IPv6 check box and manually assign the IP address.
  10. Verify the configurations are correct:
    1. The red X mark on the bottom-left corner of the icon for the network adapter (to which the Promira platform is connected) should disappear from the screen.
    2. Click the OK button.
      Confirm the Promira IP configurations
  11. Close the settings, and then re-open the Control Center Serial Software.
    1. After disconnecting and re-connecting the Promira platform, separate ports will appear for Ethernet and Ethernet-Over-USB.
  12. Select the port with Ethernet IP.
    1. The Promira platform is now set up and ready to use over the Ethernet.
      Select the Ethernet IP Port

Pull-up Resistors and Rise-Time

Depending on the hardware version of the Promira platform, using external pull-ups may improve the speed.

Promira Hardware Versions 1.01 and 1.5

For hardware versions 1.01 and 1.5, the frequency that is set is closer to the actual frequency because the internal pull-ups are “stronger”.

Promira Hardware Versions 1.7 and 2.1

For hardware versions 1.7 and 2.1, the internal pull-ups are 2.2K Ohms. In this case, the signal rise times are longer, which stretches the clock period.

If you are using a 3.3V signal level, we recommend disabling the internal pull-up, and using an external pull-up of 500 Ohms.

  • As a master device, a Promira platform of hardware version 1.7 or above has 2.2K Ohm pull-up resistors.
  • Connecting external 500 Ohm resistors in parallel may improve the speed, because lowering the effective pull-up resistance decreases the rise time of the signal. For more information about I2C speed limitations, please refer to the section Known I2C Limitations in the Promira Serial Platform I2C/SPI Active User Manual.
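To see why a parallel external pull-up helps, here is a rough back-of-the-envelope calculation in Python. The 100 pF bus capacitance is an assumed, illustrative value, not a measured one, and the 0.85 factor approximates the 30%-to-70% rise time of an RC charging curve:

```python
# Estimate how an external pull-up in parallel with the internal one
# lowers effective resistance and thus the RC rise time of the bus.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

C_BUS = 100e-12           # assumed bus capacitance, farads (illustrative)
R_INTERNAL = 2200.0       # internal pull-up, ohms (hardware v1.7/v2.1)
R_EXTERNAL = 500.0        # recommended external pull-up, ohms

r_eff = parallel(R_INTERNAL, R_EXTERNAL)
# Rise time from 30% to 70% of VDD is about 0.85 * R * C.
t_before = 0.85 * R_INTERNAL * C_BUS
t_after = 0.85 * r_eff * C_BUS

print(round(r_eff, 1))        # 407.4 ohms effective
print(round(t_before * 1e9))  # ~187 ns before
print(round(t_after * 1e9))   # ~35 ns after
```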

We hope this answers your question. Additional resources that you may find helpful include the following:

If you have more questions about our Total Phase products, feel free to contact us at sales@totalphase.com.

 

Total Phase at NI Week 2019 – Full Force Ahead


Last week, Total Phase exhibited at NI Week in Austin, TX. The theme was ‘Full Force Ahead,’ and there were many references to Star Wars.

The first day of NI Week saw a lot of traffic through our booth at the exhibition hall. We had engaging conversations with engineers, not just from Texas but also around the country and from abroad.

Total Phase Display at NI Week

I2C, SPI, CAN and USB Tools

We had two demonstrations running at our display. We had our Promira Serial Platform and Beagle I2C/SPI Protocol Analyzer on one end with the rest of our I2C/SPI tools, and the Komodo CAN Duo Interface on the other end. In between we had our assortment of USB tools. The Advanced Cable Tester v2 was released last week and we also had it on display!

How Total Phase Supports National Instruments

Previous customers took the time to stop by and express how happy they were with our products. Many engineers that stopped by, however, had not heard of Total Phase and were excited to learn about our products. Conversations revolved around I2C, SPI, and CAN. The Promira Serial Platform was a hot topic among the attendees. Since this was a National Instruments event, a lot of customers asked about how we work with LabVIEW. We offer LabVIEW drivers for all of our tools with support for LabVIEW 2017 and above.

NI Week was a productive show for Total Phase. We look forward to further engagement with our customers. You can find Total Phase next at NXP Connects in San Jose, California and Microchip MASTERs in Phoenix, Arizona.

If you are working with I2C, SPI, eSPI, USB, CAN, or A2B and have debugging or development needs, please reach out at sales@totalphase.com. We would be happy to learn more about your application and help support your needs!

Using a Beagle USB Protocol Analyzer, How Do I Trigger and Capture VBUS Measurements and USB Data?


Question from the Customer:

We are working to reverse engineer a mobile phone. We analyze the USB traffic to figure out what the tool does on the device. However, the tool that we are using often “realizes” that we’re analyzing USB traffic and “halts” – it stops us from working on this project.

In addition to the data traffic, we also need to track the USB VBUS voltage and current draw. Which of your Total Phase tools do you recommend for this project?

Response from Technical Support:

Thank you for your question! For your usage, we recommend the Beagle USB 480 Power Protocol Analyzer - Ultimate Edition, which supports the following features:

  • USB 2.0 Advanced Triggering Capabilities
  • Create state-based and flexible trigger conditions based on data patterns, packet types, error types, events, and other criteria
  • Hardware packet filtering
  • Up to eight independent states and six matches per state for USB 2.0 captures
  • Track current drawn from the USB bus and VBUS voltage
Beagle USB 480 Power Protocol Analyzer - Ultimate Edition

Capture and View Data with Complex Matching

Here is a video that shows you how to set up and use Complex Matching. For more information, please refer to the sections Triggering a Capture, Capture Control and Device Settings in the Data Center Software User Manual.

 

 

More information about capturing data is available in this article, Using a Beagle USB Protocol Analyzer, How Do I Trigger and Capture Data? Additional information is provided in the article, including how to work with the Digital I/O and the Hardware Filter.

Trigger and Capture Current and Voltage Readings

The Beagle USB 480 Power Protocol Analyzer - Ultimate Edition can be used to track the USB VBUS voltage and the current drawn by the target device. Used together, the Beagle USB 480 Power analyzer and the Data Center Software support Complex Matching, which can be used to monitor VBUS voltage or current.

  • You can configure the analyzer to trigger on the rising and/or falling edge(s) of a pre-set voltage or current threshold on VBUS; it can trigger on a rise or a drop in VBUS current or voltage.
  • The pre-set threshold can be included in any state of the Complex Matching state machine, and each state can vary which edge(s) of the threshold it detects. This feature is effective for complex debugging and for optimizing the power consumption profile of target devices.
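As a conceptual illustration of edge-triggering on a threshold (on the analyzer itself this matching happens in hardware), here is a small Python sketch; the sample values and the 4.4 V threshold are made up for the example:

```python
# A software analogue of edge-triggering on a VBUS threshold: scan
# sampled readings and report rising/falling crossings of a pre-set level.

def threshold_edges(samples, threshold):
    """Return (index, 'rising'|'falling') for each threshold crossing."""
    edges = []
    for i in range(1, len(samples)):
        prev, cur = samples[i - 1], samples[i]
        if prev < threshold <= cur:
            edges.append((i, "rising"))
        elif prev >= threshold > cur:
            edges.append((i, "falling"))
    return edges

vbus = [5.0, 4.9, 4.3, 4.2, 4.6, 5.0]   # volts, sampled over time
print(threshold_edges(vbus, 4.4))        # [(2, 'falling'), (4, 'rising')]
```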

Example of Monitoring Current and Voltage

To see an example of monitoring current and voltage readings, take a look at the Data Center Software.

  1. Install and open Data Center Software.
  2. Press the F4 key on your keyboard. The Example Captures dialog window will open.
  3. In that dialog window, scroll down and select the PD trace file usb480-fs-power.tdc, and then click the OK button.
  4. On the menu bar, click the Current/Voltage Monitor option.
  5. The Current/Voltage Monitor graph will be displayed in the lower-left corner of the Data Center window.
View of Current Voltage graph in Data Center Software

For more information, please refer to the Current/Voltage Monitoring sub-section in the Data Center Software User Manual.

We hope this answers your questions. Additional resources that you may find helpful include the following:

Do you have more questions? Please send us an email at sales@totalphase.com. You can also request a demo that applies to your application.

Total Phase at ESC Boston 2019


Being new to Total Phase, whenever the opportunity to travel comes my way, I do my very best to take it! This time, that opportunity took me all the way across the United States to Boston for the Embedded Systems Conference. I had never been to Boston or the ESC show, so to say the least, I was pretty excited.

Total Phase at ESC Boston 2019

The Beagle I2C/SPI Protocol Analyzer Leads the Pack

The show lasted just two days, so as you might assume, it flew by! There seemed to be a healthy amount of people coming by to check out our tools. The tool that got the most interest was our Beagle I2C/SPI Protocol Analyzer. People love (love may be an understatement) the ability to monitor the I2C and SPI bus in true real time. New and existing customers alike visited our booth to tell us how amazing our analyzers are. They appreciate how much time and energy our tools save them in the debug phases of development. Below are some of the reasons why our Beagle I2C/SPI analyzer gets so much attention:

  • Non-intrusively monitor I2C up to 4 MHz
  • Non-intrusively monitor SPI up to 24 MHz
  • Non-intrusively monitor MDIO up to 2.5 MHz
  • Real-Time Data Capture and Display - Watch I2C and SPI packets as they occur on the bus.
  • Bit-level timing down to 20 ns resolution
  • Fully Windows, Linux, and Mac OS X compatible
  • Includes full function monitoring tools
  • Low Cost (just $330)

See How the I2C Adapter and Analyzer Work for You

If you would like to see the demo that got people so excited about our Beagle I2C/SPI Protocol Analyzer, check out the demo video, Using an I2C Host Adapter and I2C Protocol Analyzer to Prototype Embedded Systems.

We believe in keeping it simple and straightforward. We know that no one wants to spend long nights figuring out how to use a tool, so we offer a truly “plug and play” solution. Don’t believe it? Check out this video where we show how you can literally set up and start a capture in under 90 seconds!

It is always fun to be able to meet new and existing customers. Being the East Coast Technical Sales Representative, I don’t often get to meet our customers face-to-face, but trade shows change that. I love getting the opportunity to meet and educate people on the many benefits of our tools. Our tools are simple and, again, truly easy to use. People really appreciate that Total Phase offers real solutions for real problems without breaking the bank or requiring hours of learning. Just plug in and start debugging!

More Tools - More Capabilities

Total Phase offers a range of development tools, from protocol analyzers and host adapters to, most recently, a cable testing solution. We support I2C, SPI, USB, CAN, eSPI, A2B, and cable testing. If you work with any of these protocols and are not already familiar with our tools, I would strongly suggest you reach out and give us a call. We more than likely have the solution you have been looking for.

To everyone that I got to meet in Boston, I wish you all well and look forward to working with you soon. To everyone else, please don’t hesitate to reach out and contact us. We are more than excited to help each and every one of you in whatever way we possibly can. Give us a call at (408) 850-6501 or email us at sales@totalphase.com.

Quality Testing Beyond HDMI Cable Certification


HDMI, or High Definition Multimedia Interface, was first conceptualized as a medium for transferring high definition video and audio signals over a single cable. Over the years, this interface has become widely adopted and now dozens of cable manufacturers and electronics companies develop their own versions of HDMI products that are sold to millions of consumers daily.

HDMI.org is the official HDMI organization, made up of multiple companies, including Maxell, Ltd., Koninklijke Philips Electronics N.V., Lattice Semiconductor Corporation, Panasonic Corporation, Sony Corporation, Technicolor S.A. (formerly known as Thomson), and Toshiba Corporation, which conceptualized and continue to oversee the development of HDMI. These companies developed the concept of a single cable able to transfer uncompressed high-definition video, multi-channel audio, and data over one digital interface. The organization includes the HDMI Licensing Administration, which administers the HDMI Compliance Test Specification to various HDMI adopters. To obtain this license, adopters must undergo a set of compliance tests; if they pass, their product can be recognized and labeled as HDMI certified.

 

Obtaining HDMI Certification

In order for HDMI cables to become certified, adopters first must perform specific self-tests on a representative sample of their cables. The HDMI Compliance Test Specification defines the required testing procedures as well as the minimum requirements necessary to meet the standard. Second, adopters must submit the first product of each licensed product type, whether a source, sink, repeater, or cable, to an HDMI Authorized Testing Center. Once the licensed product passes this initial inspection, adopters are not required to submit any further samples of the same product type.

One of HDMI’s cable certification programs, the Premium HDMI Cable Certification Program, performs various tests on cables incorporating newer HDMI 2.0 technology. Cables tested in this program are verified to fully support the 18 Gbps bandwidth of HDMI 2.0b, and they also undergo an electromagnetic interference (EMI) noise test to ensure the cables minimize interference with wireless signals. Cables that pass may be labeled “Premium High-Speed HDMI Cables” or “Premium High-Speed HDMI Cables with Ethernet.”

 

 Premium HDMI Cable Certification allows adopters to place a logo on HDMI products, confirming compliance to end users.
Photo Courtesy of HDMI.org

While these compliance tests determine whether an HDMI product meets the relevant spec, they do not go further in depth to characterize the performance of the licensed product. These tests establish the minimum requirements for compliance, and while they do catch design errors, passing them does not always guarantee conformance to the High-Definition Multimedia Interface specification or successful interoperability with other HDMI products.

 

Testing Beyond Certification

There are many accounts of HDMI cables acquired by consumers exhibiting image or audio transfer issues, whether the cables are cheap, expensive, or even HDMI certified. Many HDMI users have experienced fuzzy or discolored picture, intermittent image and sound, or even no sound at all. This leads us to the question: is cable certification enough to guarantee the quality of all HDMI cables mass-produced thereafter? With all the variables involved in manufacturing, it is still crucial for manufacturers to continue to test and oversee their products, or there could be repercussions.

Even after cable certification is granted to HDMI cable manufacturers, safety and quality testing during and after production should be an integral part of the manufacturing process. When manufacturing cables, including HDMI, cable companies often use human intervention to assemble the more intricate components, as complete automation is not feasible. In these cases, companies should expect the unexpected: this added level of variability is impossible to eliminate and substantially increases the margin for error.

Currently, cable manufacturers may use their own versions of production line testing, like testing for functionality or HDMI lock, but these tests do not always look below the surface to ensure the internal infrastructure is correctly assembled and operating as expected according to specification, or even as advertised. Even if a cable passes these inspections, that does not automatically make it a quality cable. Some manufacturers go further and implement statistical process control, which tests randomly selected cables and then estimates the failure rate across all cables produced using statistical calculation. While this method is adequate, it still leaves a large number of cables untested, allowing bad ones to pass through.

Until now, testing each and every cable during production has not been feasible for cable manufacturers, because no single machine could perform a quick, affordable, and comprehensive analysis on cables. With the Advanced Cable Tester v2, cable manufacturers can now perform individual quality control on every cable.

 

Advanced Cable Tester v2 Supports Testing HDMI

The Advanced Cable Tester v2 supports the analysis of HDMI-A to HDMI-A cables. HDMI cables are tested for pin continuity, including shorts and opens, with a dynamic visualization of the test results. Manufacturers can even test continuity of individual or grouped HDMI pins. It also measures DC resistance, with milliohm precision on ground and power wires and ohm resolution on most other wires. Finally, a signal integrity test of data lines at up to 12.8 Gbps per channel is performed, accompanied by an eye diagram with a mask per the cable specification, providing a visualization of the signal quality.

This tester allows for users to evaluate the insertion loss within the cable, which is a common concern with the more recent HDMI specifications, and can greatly affect the overall cable performance.

Using the HDMI-A to HDMI-A Connector Module, HDMI adopters can perform complete tests on their own HDMI cables.

This tester is designed for a variety of environments, allowing developers to pre-test cables in the lab or perform quality testing on cables during production. This next-generation cable tester focuses on supporting mass-scale production, where the cost and time taken to perform tests have dramatically improved. Now, anyone at any skill level can determine the validity of a cable using the Advanced Cable Tester v2. The tester includes an LCD screen that displays a pass or fail result with each cable insertion. For a more in-depth analysis, testers can review results on the spot, or remotely over Ethernet, in a detailed report of each individual test. Because each subtest flags out-of-spec errors, it is easy to pinpoint the flawed portion of the cable.

To learn more about how the Advanced Cable Tester v2 supports HDMI testing, please visit our website.

How Can I Set a Timeout to Minimize OS Latency when Interfacing on the CAN Bus?


Question from the Customer:

I am using the Komodo CAN Duo Interface with the Komodo Software API. My understanding is that the timeout for the command km_latency() should be set to less than the timeout value used in the command km_timeout(). However, if I set the timeout to KM_TIMEOUT_IMMEDIATE or 1 ms, I don’t see how it is possible to fulfill this timing requirement.

My questions:

  • What is the correct usage of latency when using KM_TIMEOUT_IMMEDIATE for km_timeout()?
  • What are the interactions with timeout?

Response from Technical Support:

Thanks for your questions! The following sections show the recommended methods for your setup and describe the relationship between operating system (OS) latency and timeout.

Setting Latency for the Desired Timeout

We have two recommendations for setting the latency value for your setup.

Set the Latency to 0

When the latency is set to 0 (zero), the function instructs the firmware to terminate the USB Request Block (URB) for each CAN packet so that it is sent back to the OS immediately. This method ensures the smallest latency possible.

Set the Latency Relative to Measured Time

Measure the time it takes for your CAN packet read in km_can_read() to execute, and then set the latency for less than that measured time.

The Relationship of the Operating System and Timeout

There is an inherent latency when managing USB traffic as asynchronous URBs on the host PC. This latency occurs because the OS only sees data when a URB is either entirely filled or terminated early by a short USB packet (one that ends before the URB size is reached).

How the Timeout Command Works

Here is an example of how timeout works.

In this scenario, the USB Request Blocks (URBs) are sized to 100 bytes, and each CAN packet consumes only 20 bytes of USB data.

In this case, five packets are needed before any USB data gets back to the OS, and seeing five packets may take an undefined period of time. To keep the latency controlled, the Komodo interface issues a short packet after the latency timeout has elapsed. This allows the data to become visible to the OS, and to the API script.
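The scenario above can be sketched in a few lines of Python; this is purely illustrative arithmetic, not the Komodo API, and the function names are ours:

```python
# Illustrate the URB-fill scenario described above: with 100-byte URBs
# and 20-byte CAN packets, data only reaches the OS once five packets
# arrive, unless a latency timeout forces a short-packet flush sooner.

import math

URB_SIZE = 100      # bytes per USB Request Block (example from the text)
PACKET_SIZE = 20    # bytes of USB data per CAN packet

packets_to_fill = math.ceil(URB_SIZE / PACKET_SIZE)
print(packets_to_fill)  # 5

def packets_delivered(num_packets, latency_expired):
    """Packets visible to the OS: full URBs only, unless the latency
    timer forced a short-packet flush of the partial URB."""
    if latency_expired:
        return num_packets
    return (num_packets * PACKET_SIZE // URB_SIZE) * packets_to_fill

print(packets_delivered(3, latency_expired=False))  # 0: URB not full yet
print(packets_delivered(3, latency_expired=True))   # 3: flushed early
```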

Timeout Values

Here is some information about the effects of timeout values:

  • When timeout_ms is set to the value KM_TIMEOUT_IMMEDIATE, km_can_read() becomes non-blocking and returns immediately.
  • When timeout_ms is set to the value KM_TIMEOUT_INFINITE, km_can_read() blocks until some data is received.

For more information, please refer to the API Documentation chapter of the Komodo CAN Interface User Manual.

Additional resources that you may find helpful include the following:

We hope this answers your questions. If you have other questions about our CAN interfaces or other Total Phase products, feel free to email us at sales@totalphase.com. You can also request a demo that applies to your application.

5 Reasons Why We Should Not Be Scared of IoT Security


The Internet of Things (IoT) has been revolutionizing industries and is continuing to grow at an exponential rate. With all the interconnectivity, security is naturally a growing concern. Will the personal information stored on your Apple Watch get hacked? What about the bank information you have stored in a digital wallet? Hackers are smart. They are constantly looking for new ways to infiltrate networks and commit fraud against unassuming internet users.

While end users of IoT devices want to know why they should feel safe using IoT devices, it is important for developers and engineers that design these products to understand how to design and develop secure IoT devices that meet consumer demands. In this piece, we’ll dive into some reasons consumers can feel reassured about IoT security and provide some pro-tips to developers and engineers to enable them to create more secure IoT devices.

Example devices of Internet of Things

 

Protection against Cybercrime

Protection against cybercrime is at the core of IoT security. Cybersecurity is a constant game of cat and mouse where device architects and hackers seek to one-up each other. Savvy consumers want to know their devices are secure and modern security protocols and standards are being implemented on their devices. Secure access methods like HTTPS & SSH and compliance with standards help win consumer trust.

What can IoT developers do to protect against cybercrime?

While there is no one-size-fits-all answer to this question, there are a number of practices that can help improve the security of IoT devices and meet market demands. These include:

  • Enforce secure password policies by default. Insecure passwords are low-hanging fruit for hackers. When possible, create devices that prompt or force the user to create a secure password and encourage your users to never use default passwords in production environments.
  • Enable the use of secure network communications. Network connectivity is at the heart of IoT. However, the network is also an exposed medium for malicious attackers to attempt to compromise devices. Only use encrypted network communications protocols like HTTPS and SSH whenever possible.
  • Built-in firewalls & intrusion detection. IoT devices are prime targets for attackers. For this reason, IoT developers now need to begin to implement more robust software security measures. Embedded firewalls can restrict access to only allow specific network ports and protocols and intrusion detection can help identify anomalous behavior and flag threats.
  • Use a secure boot. The secure boot process is vital to optimizing the security of an embedded device. Secure boot ensures that a given piece of hardware only authenticates code that was created using a specific set of credentials. This prevents hackers from installing a different operating system on an embedded system and using that to compromise user data.

Remember, IoT devices are effectively embedded devices that are network enabled. This means that following embedded device security best practices can go a long way in making your IoT product robust and secure.
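As a toy illustration of the first practice, a device setup flow might refuse weak passwords with a check along these lines (the specific rules here are illustrative, not a standard):

```python
import re

def password_is_acceptable(pw, min_len=12, banned=("admin", "password", "12345678")):
    """Reject short passwords, common defaults, and low-variety passwords."""
    if len(pw) < min_len or pw.lower() in banned:
        return False
    # Require at least three of: lowercase, uppercase, digit, symbol.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, pw)) for c in classes) >= 3

print(password_is_acceptable("admin"))             # False
print(password_is_acceptable("Correct-Horse-99"))  # True
```

Forcing a check like this at first boot, rather than shipping a shared default password, removes the easiest attack path against fleets of identical devices.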

There Is a Regulatory Baseline for Security

While no industry-accepted standards exist yet, Congress has set forth rules and regulations to keep the IoT in check. The IoT Cybersecurity Improvement Act was introduced in 2017, as was the Developing Innovation and Growing the Internet of Things Act. The latter still has to be approved by the House.

This being said, it’s a known fact that connected devices and the IoT are not leaving any time soon. The world of “smart” and intelligent computing is here to stay, and if it’s going to open so many up to vulnerability, it may as well be regulated.

In 2018, Congress passed the State of Modern Application, Research, and Trends of IoT Act, which requests the Department of Commerce to study the IoT industry and recommend ways to promote the secure growth of IoT products and devices.

In September 2018, California enacted a law that calls for IoT devices sold in the state to come with security requirements.

How can IoT developers ensure they meet industry and regulatory standards?

Staying abreast of the latest developments in legislation related to IoT is vital to remaining competitive. The industry is growing rapidly and data privacy and security laws are struggling to keep up. This creates an environment where the legal and regulatory aspects of IoT can change quickly. For this reason, it is important for developers of IoT devices to stay up to date with the latest on IoT legislation and remain ahead of the curve when it comes to implementing best practices. To that end, you can start by familiarizing yourself with the State of Modern Application, Research, and Trends of IoT Act to see where legislators are headed.

Continuous Software Updates

End users understand that vulnerabilities arise and security patches and firmware updates are a part of using smart devices. In fact, organizations that release patches to vulnerabilities are generally viewed more positively than those that do not from a security perspective. Updates are generally released for products to implement new features, fix bugs uncovered by debugging or user reports, and/or to address security vulnerabilities.

How can IoT developers develop sound software update processes?

Whenever practical, offer to automate the update process for users. Automatic patches are more likely to be promptly applied. When users opt out of automatic updates, or if they are otherwise impractical, make sure to have a well-defined means of informing users of your latest updates.

Growing Consumer Knowledge

Popularity usually sparks research, or at least some quest for knowledge, about certain things. When it comes to the cloud, people want to know: where does my information go? Why is it stored in a “cloud?” Is it easier for hackers to get my personal information in the cloud? This has an upside: the more knowledgeable people are about the cloud and IoT security, the stronger their tendency to keep their commonly used IoT devices updated for fear that they might get hacked.

How can IoT developers and engineers benefit from growing consumer knowledge?

By taking security seriously, writing good documentation, and enabling easy patching of security vulnerabilities. As mentioned previously, people expect to patch or update smart devices. To ensure you are viewed as a company that values the security of your customers and takes IoT security seriously, your documentation and patch processes need to be user-friendly and coherent.

At best, users will find convoluted or unclear patch processes and documentation difficult to follow; at worst, those processes will lead to patches being overlooked and create security vulnerabilities. In either case, the reputation of the device manufacturer can be damaged. As mentioned above, if you can automate patch processes for the user, this is even better as it shifts the burden off of them and makes the process seamless.

Cybersecurity Is a Top Priority for Organizations

Before everything was connected, and before every industry was finding ways to adapt its business methods to connectivity, cybersecurity was not at the forefront of many manufacturers’ minds. They wanted to sell products. Now, 55 percent of organizations rank IoT security as a top priority. They know that if customers feel safe using their products, devices, or services, they will retain those customers while gaining new ones.

How can IoT developers prioritize IoT security?

One of the most important ways of prioritizing IoT security is to have stringent test methods that emphasize security before releasing a new product or patch to production. Additionally, making sure that your products only use signed drivers is an excellent way to help demonstrate your dedication to security. Further, if your digital signature is ever compromised, be sure to inform your users immediately and update your digital signature.

Conclusion: IoT Can Be Secure

While the idea of storing all your personal information in connected devices might seem like a scary idea for some consumers, the security practices developers are putting into place help make for a safe cloud network. It’s nice to be able to rely on your devices having built-in security features or reminding you to update your system, but you must stay on top of those updates and ensure that your software is functioning properly.

To ensure a safe and connected future, IoT security must be taken seriously by both the manufacturer and the end-user. If you are an IoT engineer or developer looking for industry-leading tools to aid in the design, monitoring, or debugging of embedded systems, check out our product offering today! You can also request a demo that is specific for your application.

Request a Demo


How Do I Sync the Clocks from Two USB Data Captures using Beagle USB Analyzers?


Synchronized Clocks

Question from the Customer:

I am using two USB protocol analyzers, a Beagle USB 12 Protocol Analyzer and a Beagle USB 480 Protocol Analyzer. It looks like the clocks are out of sync between the two analyzers. My device receives data on one USB port, processes the data, and sends it on the other USB port. Because the clocks are not synchronized, analyzing the data is difficult - it looks like the device is sending data before it receives data to process.

Is there a way to synchronize the clocks between two Data Center Software sessions? Are there other analyzers I should consider?

Response from Technical Support:

Thanks for your question! The multiple point analysis of a USB system is extremely helpful for many development scenarios. A common example is the hub, where traffic is sent between the hub and the host, as well as between the hub and the device. In this scenario, clock drift prevents an accurate correlation of captured data from two different analyzers.

Our Solution for Synchronizing Clocks

Synchronizing capture events (start, trigger, and stop) on multiple analyzers can be difficult to accomplish. This feature is not supported by the Beagle USB 480 analyzer or the Beagle USB 12 analyzer; however, it can be achieved by using a pair of Beagle USB 5000 v2 Protocol Analyzers - USB 2.0 Edition with the Beagle USB 5000 v2 Protocol Analyzer - Multi-Analyzer Synchronization Option.

How the Beagle Cross-Analyzer Sync Option Works

The Beagle USB 5000 v2 Protocol Analyzer - Multi-Analyzer Synchronization Option solves the challenges posed by multiple point analysis. With this option, you can easily and reliably monitor both sides of a USB hub, or any number of points in a USB system. Two HDMI ports on the back of the Beagle USB 5000 v2 analyzer allow two or more analyzers to synchronize their capture timestamps, as well as their capture start, capture trigger, and capture stop events.

 

Diagram of a hub with two Beagle protocol analyzers and USB devices set up for synchronized data captures.

The Cross-Analyzer Sync Option is automatically enabled when two or more Beagle USB 5000 v2 analyzers, with this option licensed, are connected via their back panel HDMI ports. For more information about this feature, please refer to the section Cross-Analyzer Sync in the Beagle Protocol Analyzer User Manual.

For an example of using this feature, please refer to our knowledge base article Hub Latency Calculation using Beagle 5000 Analyzer. This article describes how to set up the hub, the target device, the target host, and the analysis computer with two Beagle USB 5000 v2 analyzers. The example uses a USB 3.0 4-port SIIG hub and a SuperSpeed flash stick. The instructions can be modified for other devices.

We hope this answers your questions. Additional resources that you may find helpful include the following:

More questions? You can send us your questions via sales@totalphase.com. You can also request a demo that applies to your application.

Request a Demo

What You Must Know About These 5 Serial Communication Protocols


Serial communication in the world of telecommunications is the sequential transfer of data one bit at a time over a communication channel or communication bus. Usually, this transfer of information happens between two or more components of embedded systems. In comparison, parallel communication sends several bits of information as a unit all at the same time.

Advantages of Serial Communication

While parallel communication might seem simpler, serial communication is more economical for long-distance communication and cable channels that span long distances. Transmission speeds and signal integrity are also improving, making serial communication a better option today even for communication across shorter distances.

Although it would seem parallel communication should be faster than serial communication, making it the optimal choice, the reality is that serial communication requires fewer cables to transmit the data and suffers less crosstalk, the undesired coupling of signals from one channel to the next. When fewer conductors are present, as is the case with serial communication, there are fewer inconsistent signals. The end result is that serial communication generally offers superior performance and speed.

Serial Communication Port

Why is serial communication necessary? To function properly and keep your world connected, embedded systems need to continuously interpret, cycle, update, and share information. When a microcontroller in an embedded system holds a particular data set, it is stored in parallel form but converted to serial data when it is transferred to the output buffer. When receiving data, the microcontroller on the other end converts the information back to parallel form.
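A minimal sketch of this parallel-to-serial round trip is below; it is illustrative only, since real hardware does this in shift registers:

```python
def to_serial(byte, lsb_first=True):
    """Convert a parallel byte into the sequence of bits shifted out on a serial line."""
    order = range(8) if lsb_first else range(7, -1, -1)
    return [(byte >> i) & 1 for i in order]

def to_parallel(bits, lsb_first=True):
    """Reassemble received bits back into a byte on the other end."""
    order = range(8) if lsb_first else range(7, -1, -1)
    return sum(bit << i for bit, i in zip(bits, order))

bits = to_serial(0xA5)         # 0xA5 = 0b10100101
print(bits)                    # [1, 0, 1, 0, 0, 1, 0, 1]
print(hex(to_parallel(bits)))  # 0xa5
```

Which bit goes first (LSB or MSB) is a property of the protocol; the round trip works as long as both ends agree on the order.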

In order to keep your ever-modernizing world connected at the speed you’re accustomed to, serial communication protocols have been created to process digital information securely, efficiently and smartly. In this piece, we will dive into five serial communication protocols you should know in order to stay up to date on how embedded systems are communicating second by second, day by day.

CAN Protocol

The Controller Area Network (CAN) protocol was created with the intent to minimize and simplify communication processes in automobiles. Robert Bosch GmbH, the developer of the CAN protocol, recognized the need to make intelligent vehicles more affordable and practical.

Cars in the late 1970s were just beginning to see changes such as anti-lock braking systems, air conditioning, central door locks, airbags, gear control, and management systems. All of these advanced features meant more complex designs and heavier-duty mechanical parts that were more expensive than ever before. To keep expenses low and simplify the in-vehicle wiring and machinery, Bosch introduced CAN.

What CAN Supports

Engineers producing new vehicles could now manage a car's electronic subsystems over a single cable. The microcontrollers and devices within the network could communicate with each other without a host computer. CAN is message-based and is now used in applications such as cars, trucks, and buses - both gasoline-powered and electric - as well as aviation equipment, automated machinery, elevators, and medical equipment.

Total Phase offers two CAN products: the Komodo CAN Solo Interface and the Komodo CAN Duo Interface.

I2C Protocol

Think about switching your tablet screen to night reader mode and the way the colors invert to make reading easier on your eyes. Now consider when you move from a dark room to a brightly lit area and how you might need to boost the brightness on your phone screen in order to read a text. These applications are using the I2C protocol.

The Inter-Integrated Circuit (I2C) protocol has been around for three decades but is still widely used in embedded systems today. Designed by Philips Semiconductor (now NXP Semiconductors), I2C attaches lower-speed integrated circuits to processors and microcontrollers for short-distance communication. Engineers can thus connect multiple slave devices to one or more master devices on the same printed circuit board.

What I2C Supports

This protocol is more often used when simplicity and economy are more important factors than speed. Some applications that implement the I2C protocol include: real-time clocks, extended display identification data for computer monitors via VGA or HDMI, volume settings in embedded speakers, and small LCD displays.

I2C uses two wires for data transfer: a data line (SDA) and a clock line (SCL). You should note that I2C lines are open-drain, so pull-up resistors are required to return the bus to a high state when no device is driving it low. I2C devices are, however, flexible when it comes to speed and functionality.
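A frequent point of confusion with I2C is the 7-bit versus 8-bit address convention: the 8-bit form is simply the 7-bit slave address shifted left by one with the read/write bit appended. A quick sketch:

```python
def i2c_first_byte(addr7, read):
    """Combine a 7-bit I2C slave address with the R/W bit into the 8-bit wire byte."""
    assert 0 <= addr7 <= 0x7F, "7-bit I2C addresses range from 0x00 to 0x7F"
    return (addr7 << 1) | (1 if read else 0)

print(hex(i2c_first_byte(0x50, read=False)))  # 0xa0: write to address 0x50
print(hex(i2c_first_byte(0x50, read=True)))   # 0xa1: read from address 0x50
```

A datasheet quoting "0xA0" and one quoting "0x50" may therefore be describing the same device, just in different conventions.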

Total Phase offers three I2C products: the Aardvark I2C/SPI Host Adapter, the Beagle I2C/SPI Protocol Analyzer and the Promira Serial Platform.

SPI Protocol

Similar to I2C, the Serial Peripheral Interface (SPI) protocol is used for short distance communication. The difference with SPI is that it only implements one master and it needs four wires to communicate. One or more slaves can be supported through SPI by adding additional slave select lines. Motorola developed the interface in the mid-1980s and it has become a common synchronous serial communication protocol since then.

The four signals transferred over four wires include Master Out Slave In (MOSI), Master In Slave Out (MISO), Serial Clock (SCK) and Slave Select (SS). SPI interfaces can support data speeds over 100 MHz, versus I2C, which supports up to 3.4 MHz.
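A toy simulation of one full-duplex SPI exchange (MSB first) may help illustrate how the data signals interact; this models the shifting only and ignores clock polarity and phase details:

```python
def spi_transfer(master_byte, slave_byte):
    """Simulate one 8-clock SPI exchange: MOSI and MISO shift simultaneously, MSB first."""
    mosi_in, miso_in = 0, 0
    for i in range(7, -1, -1):
        mosi_bit = (master_byte >> i) & 1    # master drives MOSI
        miso_bit = (slave_byte >> i) & 1     # slave drives MISO on the same clock
        mosi_in = (mosi_in << 1) | mosi_bit  # slave samples MOSI
        miso_in = (miso_in << 1) | miso_bit  # master samples MISO
    return mosi_in, miso_in  # (byte the slave received, byte the master received)

print(spi_transfer(0x3C, 0xA7))  # (60, 167): both sides swap a byte per 8 clocks
```

This is why SPI reads are often implemented by clocking out dummy bytes: every 8 clock cycles exchanges one byte in each direction at once.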

What SPI Supports

The SPI protocol is used mainly in SD cards, LCD interfaces, reading data from real-time clocks, and communicating with temperature and pressure sensors, as well as video game controllers.

Total Phase has four SPI products for you: the Aardvark I2C/SPI Host Adapter, the Beagle I2C/SPI Protocol Analyzer, the Cheetah SPI Host Adapter, and the Promira Serial Platform.

eSPI Protocol

The Enhanced Serial Peripheral Interface (eSPI) protocol was designed by Intel to reduce the number of pins used on the motherboard, downsizing and simplifying a number of other processes at the same time. The eSPI protocol reduces the operating voltage to facilitate smaller chip manufacturing processes and provides greater available throughput than the previous LPC bus. The eSPI bus can also be shared with SPI devices.

What eSPI Supports

Before eSPI, the Low Pin Count (LPC) bus was used in IBM-compatible personal computers. It allowed low-bandwidth devices such as CD-ROM drives to connect and transmit data over a multiplexed four-bit-wide bus. Now, the eSPI protocol is used mostly where real-time flash sharing is required.

Total Phase has one eSPI product for you: the eSPI Analysis Application for the Promira Serial Platform.

USB Protocol

The Universal Serial Bus (USB) protocol, created in the 1990s, is one of the most popular means of connectivity for modern devices. Nowadays, USB connections can charge phones, connect phones to printers, connect hard drives to laptops, or even connect disk drives to laptops to watch a DVD you own that might not be on Netflix just yet. In many cases, USB has replaced older means of communication like RS-232 and SCSI. There are a variety of different USB connection types. In addition to the Standard-A and Standard-B connections found on many peripherals, micro-USB, mini-USB, and the reversible USB Type-C connector are also popular.

USB 2.0 (a.k.a. “High-speed” USB), which was released in 2000, is the most commonly used USB standard. Over the past few years, USB 3.0 (SuperSpeed USB) and 3.1 (SuperSpeed+ USB) have been growing in popularity. USB 3.2 was released in August 2017 and is only beginning to be adopted by consumers and businesses.

From a backward compatibility perspective, USB 2.0 devices can be used in a USB 3.x port. Similarly, a USB 3.0 device can be connected to a USB 2.0 port. However, in both cases, you would not achieve USB 3.x speeds and would be limited to the 480 Mbps speed of USB 2.0.

What USB Supports

From a communications perspective, up to 127 USB devices can communicate on a single USB bus. In addition to data communications, USB connectors can also charge devices. USB 1.x and USB 2.0 use four wires: two for communication (Data- and Data+) and two for power (VBUS and ground). USB 3.x increases the complexity and feature set by increasing the wire count to nine. For a deeper dive on USB protocols, including USB 2.0 & 3.0 signaling, USB enumeration, USB descriptors, & USB architecture, check out our USB Background article.

Total Phase offers multiple USB products: the Beagle USB 12 Protocol Analyzer, the Beagle USB 480 Protocol Analyzer, the Beagle USB 5000 v2 Protocol Analyzer, along with the USB Power Delivery Analyzer for PD analysis and the Advanced Cable Tester v2 for cable testing.

Summary

Modern serial communication protocols are necessary when it comes to streamlined communication between embedded devices. The more complex connected devices and the Internet of Things seem to become, the more protocols there are making sure that the transfer of data is seamless and fast enough to keep you running at a high speed.

No matter what type of business you do, Total Phase is here to support your data transfer from device to device. We can help you choose the best product for the type of work you do - you can contact us by email or request a demo.

I Need to Program 129 Bytes to an I2C EEPROM – How Should I Use Your Example Programs for My Chip?


Question from the Customer:

I’m working on automating the programming of a Microchip 24LC128 I2C EEPROM. I’m using the Aardvark I2C/SPI Host Adapter with the Aardvark Software API, specifically Python. Looking at aai2c_eeprom.py, how do I specify the start location for writing? The slave address is 0x50.

What would be the correct syntax for the command? I think it would look like this:

aai2c_eeprom.py 0 400 write 0x50 2 4

But so far, I haven’t been able to get the program to work with my chip. Any suggestions?

Response from Technical Support:

Thanks for your question! The functional examples that Total Phase provides with the API software package read, program, and erase two specific devices: the AT25080A SPI EEPROM and the AT24C02 I2C EEPROM, which are installed on our I2C/SPI Activity Board.

Our API examples can be used as a baseline for building the code that you need for your chipsets. As specifications vary per chip design, these example programs may not work for other devices, but the programs can be modified as needed.

Overview of EEPROM Code

aai2c_eeprom.py is used to read, write, and erase an EEPROM. Here are the usages:

  • Read: aai2c_eeprom PORT BITRATE read SLAVE_ADDR OFFSET LENGTH
  • Write: aai2c_eeprom PORT BITRATE write SLAVE_ADDR OFFSET LENGTH
  • Erase: aai2c_eeprom PORT BITRATE zero SLAVE_ADDR OFFSET LENGTH
    In this case, erasing data is implemented by writing “0” in the specified length of bytes.

Adapting the Example to Your Chipset

Looking at the datasheets, the memory of the 24LC128 EEPROM is larger than the memory of the AT24C02 EEPROM, which affects the memory address range: the 24LC128 requires a two-byte memory address, while the AT24C02 uses a single address byte. To work with your device, the parameters related to the address range should be modified in the example code.
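One detail that often trips people up when adapting EEPROM write code is the page boundary: a page write that crosses a boundary wraps around within the page and corrupts data. Here is a hedged sketch of splitting a write into page-safe chunks; the 64-byte page size is taken from the 24LC128 datasheet and should be verified for your exact part:

```python
PAGE_SIZE = 64  # 24LC128 page size per its datasheet; verify for your part

def page_chunks(offset, length, page_size=PAGE_SIZE):
    """Split a write into (offset, length) chunks that never cross a page boundary.

    Page writes that cross a boundary wrap within the page and corrupt data.
    """
    chunks = []
    while length > 0:
        n = min(length, page_size - (offset % page_size))
        chunks.append((offset, n))
        offset += n
        length -= n
    return chunks

# Writing 129 bytes starting at offset 0 requires three page writes:
print(page_chunks(0, 129))  # [(0, 64), (64, 64), (128, 1)]
```

Each chunk would then be issued as its own write transaction, with a delay (or acknowledge polling) between writes so the EEPROM can finish its internal write cycle.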

How to Specify the Start Location

The start location can be specified in the OFFSET argument of the command line function. To read only a single memory location, specify the LENGTH as 1.

For more information about API commands, please refer to the API Documentation chapter of the Aardvark I2C/SPI Host Adapter User Manual.

Alternate Method - Batch Scripts

In addition to API programs, you can use XML batch files with the Control Center Serial Software. We regularly update batch scripts for production devices. As you know, new products are constantly released and prototypes are tested before release – we may not have exactly what is needed. However, like the API files, the XML files can be easily modified, and then run with the Control Center Serial Software. For an example of using batch files, take a look at this video:

 

 

Additional resources that you may find helpful include the following:

We hope this answers your question. Need more information? You can contact us and request a demo that applies to your application, as well as ask about our Total Phase products.

Total Phase Makes its NXP Connects Debut in 2019!


This past week, Total Phase exhibited at the NXP Connects conference at the Santa Clara Convention Center.

The conference was attended by extremely knowledgeable test engineering professionals from various local companies and across the nation. The event was focused on how to leverage smart technologies into new innovations including driverless cars, home automation, and industrial IoT.

Featured at the Total Phase Booth were demonstrations of our Promira Serial Platform coupled with the Beagle I2C/SPI Protocol Analyzer. Additional products such as the Beagle USB 480 Power Protocol Analyzer, Komodo CAN Duo Interface, Aardvark I2C/SPI Host Adapter, and Cheetah SPI Host Adapter were also on display.

Total Phase at NXP 2019

Many happy existing customers made time to chat with us, with most conversations focused on challenges with I2C, SPI, and CAN debugging.

If you are facing hurdles with effectively debugging common serial protocols such as I2C, SPI, USB, CAN, A2B, or eSPI, or you are one of many companies concerned with the safety and reliability of next-generation USB, Lightning, and HDMI cables, then please contact us at sales@totalphase.com – we’d love to help. You can also request a demo that is specific to your application.

Request a Demo

My USB Type-C Cable Works as Expected, but the Advanced Cable Tester Shows Rp Errors – What Can You Tell Me about these Test Results?


Question from the Customer:

I am testing a USB 2.0 Standard-A to USB Type-C cable and I keep seeing the following error in the Advanced Cable Tester report:

  • Legacy Rp + Rd Down Check
Advanced Cable Tester report showing resistor errors

What does this error mean and what causes it? Based on our design specifications, I believe this cable works properly, but this error keeps coming up.

Response from Technical Support:

Thanks for your question! The Advanced Cable Tester has built-in test profiles that are based on USB-IF compliance. It includes several preset profiles and, for special needs, supports custom profiles. One of the supported cable types is USB Type-C to USB 3.1 Standard-A.

According to the USB specification, this cable type is required to have an Rp resistor. Based on the test results you showed us, your cable appears to have an Rd resistor instead. This is why you are seeing the test failure. If your cable intentionally does not adhere to the USB specification, custom test profiles can be created.

USB Type-C Cable Compliance and Standard Test Results

The termination resistors Rp and Rd and their switches are required by the USB specification. A cable may work correctly in one scenario, but if it does not adhere to USB-IF standards, the standard test profiles will show errors. Here are two examples:

  • The Rp and Rd resistors are required for connection detection, plug orientation detection, and for establishing the USB source/sink roles.
  • Values outside these parameters can cause the cable to fail even in otherwise optimal scenarios.

Other Considerations about Rp Test Results

There are other scenarios that could result in test errors. For example, the cable itself may not be within the specification; the Rp may be on the wrong line of the cable.

  • For compliance, Rp must be on the same side as DPDM1.
  • If Rp is on the wrong side, even if the Rp value is within range, its presence in the wrong place will result in an error: the sanity tests will fail.
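For illustration only, the resistance portion of such a check can be modeled as a simple tolerance comparison. The 56 kΩ nominal value below is the commonly cited Rp for legacy Standard-A to Type-C cables and should be confirmed against the current USB Type-C specification:

```python
def rp_within_tolerance(measured_ohms, nominal=56_000, tol=0.05):
    """Check a measured pull-up against nominal value ± tolerance."""
    return abs(measured_ohms - nominal) <= nominal * tol

print(rp_within_tolerance(56_200))  # True: within 5% of 56 kΩ
print(rp_within_tolerance(5_100))   # False: 5.1 kΩ is an Rd value, not Rp
```

As the second case shows, a cable fitted with an Rd where an Rp belongs fails this check by a wide margin, which is consistent with the error you are seeing.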

Additional resources that you may find helpful include the following:

If you have more questions about the Advanced Cable Tester v2 or other Total Phase products, please contact us at sales@totalphase.com. You can also request a demo that applies to your application.

Request a Demo

Behind the Scenes of Video Game Consoles: Embedded Systems


Video Console

Original Photo by Stas Knop

Video gaming has come a long way from its humble beginnings. What was once a rudimentary system with simple graphics and joystick controllers quickly evolved, and the video games we know today are extraordinarily advanced.

Embedded Components of a Video Game Console

Video game consoles are embedded systems comprising many components, each serving a specific function, that allow the system to take input from the player and relay the output to a screen display. Present-day video game consoles generally consist of these embedded components:

  • User control interface
  • CPU
  • GPU
  • RAM
  • Operating System
  • Storage medium for games
  • Video output
  • Audio output

The Beginning of Home Entertainment Gaming

How do these components work together to form a video game console? Let’s get an idea of the foundation of video games to start:

Some of the first home entertainment video games were introduced in the 1970s and 1980s, but one of the most memorable video game consoles of that time, Atari Video Computer System (VCS), changed video game history with its incorporation of microprocessors within its infrastructure. Before this, video games relied on a core board with transistors and diodes.

Specifically, the Atari 2600 was credited with popularizing the incorporation of microprocessors within consoles; it used the MOS 6502 microprocessor. The Atari 2600 also incorporated 128 bytes of RAM and 4-kilobyte ROM (read-only memory) chips loaded with software. These ROM chips were stored in removable cartridges, allowing users to easily swap various games using the same hardware. The Atari VCS also included a custom graphics chip called Stella, which allowed the system to sync with the television, generating the screen display and sound effects.

Game Cartridges

Photo by Kevin Bidwell

After the first Atari video game system was created, many other key video game players entered the market, including Nintendo, PlayStation, and Xbox. Within the last couple of decades, video games have advanced exponentially, with each new release providing more power, enhanced graphics, quicker loading times, and all new ways to interact with games, all because of the evolving embedded components within the system.

Video Gaming Today

While the video game consoles made today each provide unique features and vary in performance, every console is built on a similar embedded foundation.

Each video game system incorporates a form of user control interface so the player can interact with the game. We’ve seen video game user control interfaces go from joysticks, to pad controllers, to wireless controllers, and we are now being introduced to all new ways of interacting with video games, such as virtual reality headsets. And while the concept of pushing buttons or moving the controller to control the game may seem simple on the exterior, there is a lot happening behind the scenes.

Handheld video console for playing

Photo by Jaroslav Nymburský

Video game consoles rely on a CPU (central processing unit) to calculate various aspects of the game and control how the game responds to user input. The CPU essentially processes the game’s instructions and handles game logic in the form of movement or interaction with objects. The CPU is also very important because it passes information to the GPU, or graphics processing unit. The GPU is responsible for translating instructions from the CPU and rendering what is seen on the screen by controlling the formation of images in a frame buffer.

The GPU functions by utilizing the graphics memory or VRAM (video random access memory), which stores the video and image data, and subsequently determines how the objects within the game look to the viewer.
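To make the frame-buffer idea concrete, here is a minimal sketch (all names and dimensions are illustrative, not tied to any real console) of how pixel coordinates map to offsets in a flat block of video memory:

```python
# Minimal frame-buffer sketch: a flat byte array holding RGB pixels.
# Illustrative only -- real GPUs use dedicated VRAM and hardware scan-out.

WIDTH, HEIGHT, BYTES_PER_PIXEL = 320, 240, 3  # 24-bit color

framebuffer = bytearray(WIDTH * HEIGHT * BYTES_PER_PIXEL)

def set_pixel(x, y, rgb):
    """Write one pixel; display hardware scans this memory out each frame."""
    offset = (y * WIDTH + x) * BYTES_PER_PIXEL
    framebuffer[offset:offset + 3] = bytes(rgb)

def get_pixel(x, y):
    offset = (y * WIDTH + x) * BYTES_PER_PIXEL
    return tuple(framebuffer[offset:offset + 3])

set_pixel(10, 20, (255, 0, 0))  # draw a single red pixel
```

The key point is the offset calculation: each frame is just a contiguous region of memory that the display hardware reads on every refresh, which is why the amount and speed of VRAM directly limits resolution and color depth.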

The RAM within the system is vital to the overall interworkings of the system because it stores the game data that the CPU uses to make its calculations.

Games from the last decade are often stored on CD-ROM or DVD-ROM discs, a major upgrade from the past era of replaceable cartridges. These discs, especially those read by DVD-ROM or Blu-ray drives, can store a much larger quantity of video game software data. We also see current systems using SSDs for saving games and personal data.

Video game consoles today also provide a video signal that allows them to be hooked up to a display, and depending on the type of television and HDMI cable used, the quality of the game’s image and video can be affected.

How Does HDMI Affect Gaming?

Certain video game consoles, like the Xbox 360 and PlayStation 3, are compatible with HDMI cables, and Xbox’s most recent release, the Xbox One X, is compatible with the most recent HDMI specification, HDMI 2.1. The HDMI 2.1 spec improves on video game compatibility by adding new features that enhance video quality and definition. HDMI 2.1 supports 8K resolution and adds Variable Refresh Rate, which helps lessen the lag that gamers experience with slower refresh rates. Quick Frame Transport is also incorporated to reduce latency, and the Enhanced Audio Return Channel (eARC) delivers higher-quality audio output than seen before.

Total Phase’s Advanced Cable Tester v2 provides a testing solution for all HDMI cables, including HDMI 2.1 and below. This cable tester verifies the quality of a cable by assessing pin continuity for any shorts/opens/routing errors, DC resistance for most wires, and signal integrity at up to 12.8 Gbps. With the Advanced Cable Tester v2, HDMI cable manufacturers can go beyond cable certification and verify cable quality and HDMI lock across cables and devices in a matter of seconds.

For more information on this tool, please visit the Advanced Cable Tester v2 or contact us at sales@totalphase.com. You can also request a demo specific to your application.

Request a Demo

Total Phase Exhibits at the New Embedded Tech Expo in San Jose: SPI, I2C, and USB at Their Finest


This past week, Total Phase exhibited at the brand new Embedded Tech Expo at the San Jose Convention Center. This show was also coupled with the Sensors ’19 Expo, so there was a cross pollination of like-minded test engineering and development professionals. Hundreds of exhibitors and thousands of attendees walked the floor and were also treated to case study presentations at both the Embedded Tech and Sensors Expo theatres.

On display at the Total Phase booth were demonstrations of our Promira Serial Platform, Beagle USB 480 Power Protocol Analyzer, and, for the first time at a major West Coast-based event, our Advanced Cable Tester v2, which received tremendous feedback! Comments ranged from “I wish I had this before” to “This is amazing.”

Total Phase Booth at the New Embedded Tech Expo in San Jose, 2019

Many happy current and past customers made time to stop by our booth, but we also saw a healthy mix of individuals who hadn’t had as much exposure to Total Phase and our range of solutions. It was great getting the chance to educate fresh faces on the time-saving capabilities of our protocol analyzers and host adapters.

If you are finding challenges with effectively debugging common serial protocols such as I2C, SPI, USB, CAN, A2B, or eSPI, or you are one of the many companies concerned with the safety and reliability of next-generation USB cables, please contact us at sales@totalphase.com; we’d love to help.


7 Reasons The Internet of Things (IoT) Benefits Healthcare


Digital transformation has changed the way the healthcare industry operates. The shift to the use of electronic health records (EHR) over the last decade-plus and the 2014 push for “meaningful use” of EMRs (electronic medical records) in the American Recovery and Reinvestment Act brought about the first major wave of changes. Now, with the rise of the Internet of Things (IoT) another sort of digital transformation is coming to the world of healthcare. Coupling IoT and healthcare will lead to huge leaps forward for healthcare organizations in the areas of healthcare operations, healthcare monitoring, costs, patient outcomes, and beyond.

In this article, we will discuss 7 reasons IoT benefits healthcare and provide some practical examples of use cases and applications of IoT for healthcare organizations.

IoT and Healthcare Benefit #1: Lower costs

From a business perspective, IoT will have a huge impact on costs in healthcare. For example, wearable technology will allow providers to capture data in real time and lead to fewer visits without compromising the quality of the data.  

Similarly, telehealth applications coupled with these wearable technologies can help bring quality healthcare for chronic health issues to remote and rural locations in a more affordable and effective manner. Patients with chronic health issues who were far away from healthcare providers in the past had to deal with the high costs of multiple visits, which can significantly impact the quality of care those patients receive. By coupling IoT and healthcare and adding telehealth to the equation, both providers and patients can benefit from lower costs.

IoT and Healthcare Benefit #2: Improved connectivity & communications

One of the major paradigm shifts that is resulting from combining IoT and healthcare is the ability to better automate communication and create interoperability between medical machines and between different healthcare organizations. In the past, connecting providers with the data they need could be an arduous process in the healthcare industry. With IoT and healthcare, machine-to-machine communication is possible and patient outcomes can be improved as a result.

To conceptualize the connectivity benefits of IoT in the healthcare industry, consider the use of smart sensors within a healthcare organization. Not only can the sensors communicate data to enable proactive preventative maintenance, but they can also help ensure nurses have access to patient care data in real time.

From an engineering standpoint, developers and engineers of IoT solutions must be capable of meeting these demands without inconveniencing healthcare staff or patients. Tips to do so include: enabling secure Wi-Fi connectivity, maximizing battery life, and minimizing power consumption & footprint.

Modern Health Care with IoT and AI (Image courtesy of Pixabay)

IoT and Healthcare Benefit #3: Enhanced Information Quality

The quality of available information can make a world of difference to a healthcare organization. By creating a web of healthcare-related IoT sensors that capture data and coupling it with Artificial Intelligence (AI), a world of new possibilities opens up. In short, because more data is available, better diagnosis and treatment plans are possible. AI and predictive analytics allow healthcare organizations to take the wealth of data created by IoT in healthcare and turn it into actionable information.

IoT and Healthcare Benefit #4: Productivity Tracking

From an operations standpoint, IoT in healthcare organizations can help ensure that workflows are optimized and staff productivity is high. This is particularly important in large, complex medical facilities where hundreds or thousands of employees help make patient care possible. IoT devices can help determine if tests have been administered, ensure unauthorized access to restricted areas is not allowed (e.g. using smart cameras or smart wearables that grant access), and help with inventory control.

IoT and Healthcare Benefit #5: Improved Drug Management

While IoT can help enable improved drug management from an inventory control perspective and increase the speed by which a drug is available once it is prescribed, there are other use cases as well. For example, the Abilify MyCite pill helps confirm if a patient is actually taking their prescribed drugs. Additionally, by adding intelligence to drug delivery equipment and devices that measure vital statistics, providers can help patients within a facility get the medication they need faster and do a better job at monitoring the effectiveness.

IoT and Healthcare Benefit #6: Enabling Preventative Treatment

At its core, one of the main benefits of IoT is the wealth of data it generates. Coupled with AI and predictive analytics, this data can bring to light trends, correlations, and insights that may have otherwise remained hidden. By analyzing the data generated by wearables, smart hospital beds, blood pressure monitors, fitness trackers, and more, healthcare providers can become much more proactive in preventing health issues.

More data means more informed decisions and more precise diagnoses. It also means preventative treatment can be prescribed much earlier, which for many diseases can drastically improve patient health.

IoT and Healthcare Benefit #7: Improved Patient Outcomes

The final benefit we will discuss is the one that matters most: improved patient outcomes. More data, better communication, and enhanced analytic capacity will mean better healthcare outcomes for patients. With more granular data and real-time monitoring, healthcare organizations and providers can provide individualized assessments that are targeted to address highly specific issues with context.

Further, the preventative measures made possible by IoT in healthcare will help give patients a significantly better chance at avoiding or mitigating the severity of a variety of illnesses.  Similarly, IoT and the digital transformation occurring in the healthcare industry will allow patients to make more informed decisions about their own health.

What are Some Real-World Examples of IoT in Healthcare?

Examples of IoT in the healthcare industry that are enabling this sort of improved information quality include:

  • The CYCORE system, which uses mobile and sensor technology to detect signs of dehydration and other symptoms related to radiotherapy
  • Smart medical beds that can communicate with other devices (e.g., blood-pressure monitors) and are used to monitor patient vitals and adjust as needed
  • The Abilify MyCite pill, which can track whether a patient has swallowed a pill
  • FloPatch, a smart bandage that detects blood flow
  • AdhereTech’s smart pill bottles, which help make prescription refills seamless

Where does Total Phase Help Enable the Use of IoT in Healthcare?

Total Phase is committed to providing industry leading tools to enable the development and debugging of the embedded systems that make IoT possible. Most IoT devices are network-enabled embedded systems and our product line of host adapters and protocol analyzers can be used by engineers to develop secure and robust solutions.

For example, by using the latest diagnostic tools, developers can ensure their smart sensors are operating in an efficient manner that optimizes battery life by minimizing power consumption. Additionally, Total Phase products can help aid in the development of IoT solutions that have embedded systems security in mind to make sure they are capable of standing up to the stringent security demands in the healthcare sector.

Summary

At a high level, the benefits of IoT to the healthcare industry are: improved information quality, more efficient operations, and better patient outcomes. By leveraging IoT in their facilities, healthcare organizations can help ensure they are doing everything possible to provide the best quality of care they can. IoT enables insights and analytics that were previously out of reach and empowers patients and providers to make more informed healthcare decisions. Total Phase helps enable IoT in the healthcare industry by providing industry leading monitoring, debugging, and development tools for a variety of protocols used to develop the embedded systems that make up the Internet of Things.

To find a solution that is right for you, check out our product guide. You can also request a demo that is specific to your application.

How Do I Use the Combined I2C Format for 7-Bit Addresses?


Question from the Customer:

I am working with the Aardvark I2C/SPI Host Adapter, and I need to apply the I2C Combined Format. Looking at the Aardvark I2C/SPI Host Adapter User Manual, I see Aardvark API Software is available for doing that, but for ease of use, I would prefer a GUI application.

Also, I don’t need 10-bit addresses for my project. The API commands seem to imply I need to do that – is there a way I can use 7-bit addresses instead?

Response from Technical Support:

Thanks for your question! For your application, you can use our Control Center Serial Software, an easy-to-use GUI application that provides access to all the I2C and SPI functions of the Aardvark adapter, including Combined Format.

When using I2C protocols, including Combined Format, the 7-bit address is the default. The 10-bit API command that you saw in the Aardvark User Manual, AA_I2C_10_BIT_ADDR, only needs to be executed to work with 10-bit addresses.

Example of Combined I2C Format with 7-Bit Address

Here is an example of using the Combined Format with 7-bit addresses.

Combined Format with 7-bit addresses

Combined Format Details

  • Write 17 bytes with no stop bit to 7-bit slave address 0x63, starting at offset 0x00.
  • Master reads 18 bytes with a stop bit.

Use the following settings (select the corresponding check boxes) in the Control Center Serial Software:

      1. In the Master dialog:
        • Uncheck 10-Bit Addr.
        • Check Combined FMT and No Stop.
        • Set the Slave Address to 0x63.
      2. In the Master Register Read section, enter the following parameters:
        • Register Address: 0x03
        • Address Width: 1 byte
        • Number of Data Bytes: 18
      3. Click the Master Register Read button.
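For reference, here is a rough sketch of the address bytes a master puts on the wire for a combined-format transaction like this one. The helper function, names, and placeholder payload are purely illustrative; they are not part of any Total Phase API:

```python
# Sketch of the on-wire byte framing for a combined-format transaction
# with 7-bit slave address 0x63.  Names and payload are illustrative only.

def address_byte(addr7, read):
    """The 7-bit address occupies bits 7..1; bit 0 is the R/W flag (1 = read)."""
    return (addr7 << 1) | (1 if read else 0)

SLAVE = 0x63

# Phase 1: START, address + W, register offset, 17 data bytes, NO stop bit
write_phase = [address_byte(SLAVE, read=False), 0x00] + list(range(17))  # placeholder payload

# Phase 2: repeated START, address + R, then the master clocks in 18 bytes
read_phase_header = [address_byte(SLAVE, read=True)]

print(hex(write_phase[0]))        # 0x63 shifted left with write bit clear
print(hex(read_phase_header[0]))  # same address with the read bit set
```

This also shows why the same 7-bit address is sometimes quoted as two different "8-bit" values: 0x63 becomes 0xC6 for a write and 0xC7 for a read once the R/W bit is appended.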

For more information about I2C Combined formats, please refer to the Philips I2C Bus Specification at www.I2C-Bus.org.

Additional resources that you may find helpful include the following:

We hope this answers your question. Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about our Total Phase products.

Request a Demo

The New USB4 Spec: Faster Speeds, Optimized Video Transfer, and Thunderbolt 3 Compatibility

USB4 Connector (Original photo: Tony Webster)

There has been a lot of buzz with USB lately – with all the new specifications being released and even the rebranding of the latest USB 3.x specifications, it seems there are new USB updates coming out left and right. Now, USB-IF has introduced an even newer specification to add to the mix: USB4.

What are USB4 Key Features?

What is the newest USB4 specification and how does it differ from previous releases? The upcoming spec was only recently announced by the USB Promoter Group, which has so far released only its roadmap to USB4, so no definitive USB4 specifications are available yet. The most prevalent updates include increased data transfer rates, compatibility with Thunderbolt 3 devices, and enhanced resource allocation for video.

USB4 connects with Thunderbolt 3 Technology (Photo by Maurizio Pesce)

USB4 and Thunderbolt 3 Technology

Thunderbolt 3, created by Intel, introduced a technology that governs some of the most advanced data transfer rates, power charging, and video display capabilities all in one cable, intended to be the cable of all cables. Recently, Intel released the Thunderbolt 3 technology to the industry, so manufacturers can now use this standard royalty free, allowing USB4 to become an open standard across multiple devices. Combining USB and Thunderbolt 3 will allow interoperability between the two and create an overall standardization of ports, cabling, and protocols.

As per usual, USB4 will build upon and provide backwards compatibility with the preceding USB 3.2 and USB 2.0 architectures. However, one of the major differences USB4 has compared to previous USB specs is its compatibility with Thunderbolt technology, specifically Thunderbolt 3. Because USB4 uses the Thunderbolt 3 standard as its foundation, there are many comparable features between the two. Like Thunderbolt 3, USB4 will continue to utilize USB Type-C as its connector type, benefiting from its high-performance capabilities.

A Blend of Faster Speeds and Enhanced Video Display

USB4 will also match the incredibly fast data transfer speeds of Thunderbolt 3 at 40 Gbps (twice that of the preceding USB 3.2 Gen 2x2 spec) by using two lanes of up to 20 Gbps each. It will also continue to comply with the advanced Power Delivery protocol, allowing for higher wattages and better power management and supporting fast charging with up to 100 W of power.

The Type-C connector has allowed cables to become capable of transferring not only data, but video protocols as well. The USB4 spec will optimize the blend of data and display over a single connection by having them share the total available bandwidth over the bus, allowing the support for two 4K display connections or a single 5K display.
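As a back-of-envelope check on that claim, the sketch below estimates raw video bandwidth assuming uncompressed 24-bit RGB at 60 fps (it ignores blanking intervals and link encoding overhead, so real-world figures will differ):

```python
# Back-of-envelope bandwidth estimate: do two 4K60 streams, or one 5K60
# stream, fit within USB4's 40 Gbps?  Assumes uncompressed 24-bit RGB
# and ignores blanking/encoding overhead, so treat the numbers as rough.

def video_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

four_k = video_gbps(3840, 2160, 60)   # one 4K display at 60 fps
five_k = video_gbps(5120, 2880, 60)   # one 5K display at 60 fps

print(f"4K60: {four_k:.1f} Gbps, two streams: {2 * four_k:.1f} Gbps")
print(f"5K60: {five_k:.1f} Gbps")
```

Both workloads land comfortably under 40 Gbps in this simplified model, which is consistent with USB4 sharing its total bandwidth between data and display traffic.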

The USB Promoter Group mentioned in its latest statement regarding USB4, “As the USB Type-C connector has evolved into the role as the external display port of many host products, the USB4 specification provides the host the ability to optimally scale allocations for display data flow.”

The official USB4 specification has not been released, so all details on features concerning Thunderbolt 3 with USB4 are still to come.

How Total Phase Can Support your USB Application

Total Phase offers a variety of debugging and development tools for USB. We offer an extensive line of USB protocol analyzers for the USB 2.0 and USB 3.1 Gen 1 specs, all able to non-intrusively monitor and debug USB data on the bus in real time. For Type-C specifically, the USB Power Delivery Analyzer allows users to capture and debug PD negotiation occurring on the CC1 and CC2 lines without disturbing USB data; it supports the latest PD 3.0 spec and offers DisplayPort VDM decoding. Our Advanced Cable Tester v2 also provides a comprehensive testing tool that measures the safety and quality of a variety of USB and video cables. It supports signal integrity testing up to 12.8 Gbps per line, covering the latest USB 3.2 Gen 2x2 and HDMI 2.1 specs.

For more information on how Total Phase can help with your USB application, please contact us. You can also request a demo that applies to your application.

How Can I Send Data Packets Greater than 8 Bits to an SPI Device without Clock Stoppage or other Midstream Delays?


Question from the Customer:

I have been using the Cheetah SPI Host Adapter for years, and it has always worked for me. But today, I have a setup that is giving me some problems. I’m testing a new SPI device and I need to send more than 8 bits per packet. If there is any delay within that stream, such as clock stoppage every 8 bits, communicating with that device fails. The data must be sent as 32-bit words. How can I send 32-bit words using the Cheetah adapter?

Response from Technical Support:

Thanks for your question! For your system requirements, we recommend the Promira Serial Platform with the appropriate level of SPI Active application: Level 1, Level 2, or Level 3. The Promira platform is the only SPI adapter that can perform the function you’re looking for. The Cheetah adapter can provide 16-bit data transactions with no delays, but it does not support the full range of word sizes that you need.

How to Use the Promira Serial Platform for Various Word Lengths

To vary the word lengths, you will use the Promira Software API, which supports multiple operating systems and several programming languages. Example programs are also provided that can be used as-is or modified to your specifications.

The Word Lengths the Promira Platform Supports and How

For the Promira platform, the word size can be set from 2 bits to 32 bits without the “clock stopping” inter-byte delay.

For your usage, the API command ps_queue_spi_write should work. Its argument list includes the parameter word_size, which can be used to specify the packet length of each burst.

Here are details about this API command that writes a stream of words to the downstream SPI slave device:

int ps_queue_spi_write (PromiraQueueHandle queue,
                        PromiraSpiIOMode   io,
                        u08                word_size,
                        u32                out_num_words,
                        const u08 *        data_out);

Arguments:

queue            handle of the queue
io               IO mode flag as defined in table 12
word_size        number of bits for a word; between 2 and 32
out_num_words    number of words to send
data_out         pointer to the array of words to send

Return Value:

A status code is returned with PS_APP_OK on success.

Specific Error Codes:

None.

For more information about API commands, please refer to the API Documentation section of the Promira Serial Platform I2C/SPI Active User Manual.
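Since data_out is a byte array, words wider than 8 bits must be serialized before queuing. The sketch below assumes most-significant-byte-first ordering; the helper name is illustrative, and you should confirm the byte ordering your configuration actually uses against the Promira user manual:

```python
# Sketch: packing 32-bit words into the byte array passed as data_out.
# Assumes MSB-first ordering within each word -- verify against the
# Promira manual before relying on this in a real transfer.

def pack_words(words, word_size=32):
    """Serialize each word into word_size/8 bytes, most significant byte first."""
    nbytes = word_size // 8
    out = []
    for w in words:
        for shift in range(8 * (nbytes - 1), -1, -8):
            out.append((w >> shift) & 0xFF)
    return bytes(out)

data_out = pack_words([0xDEADBEEF, 0x00C0FFEE])
print(data_out.hex())  # deadbeef00c0ffee
```

With word_size set to 32 and out_num_words set to the word count, the device clocks each 32-bit word out as one continuous burst with no inter-byte delay.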

Additional resources that you may find helpful include the following:

We hope this answers your questions. If you have other questions about our software, host adapters, or other Total Phase products, feel free to email us at sales@totalphase.com. You can also request a demo that is specific to your application.

Request a Demo

What is Digital Security in 2019?


Digital technology is more important than ever. Not only are items like smartphones, tablets, and laptops ubiquitous with consumers and businesses, emerging trends like IoT (Internet of Things) and Industry 4.0 are introducing digital technology to new applications regularly. The flip side of this topic is that digital security is also now more important than ever.

Everything from personal healthcare information to trade secrets to financial data to how “smart” vehicles operate is digital in 2019. This means the stakes are high when it comes to digital security. A data breach can have severe personal or professional consequences. Further, in some cases, such as with Stuxnet, a computer virus can affect international relations and nuclear programs.

Given the broad scope of digital technology, there are a variety of layers to digital security in 2019. In this piece, we’ll dive into the different areas of cybersecurity, discuss why they are important, explain how information security has evolved over the years, and explore the importance of digital security for embedded devices.

What is Digital Security?

Since digital security has such a broad scope, it is useful to define the term before exploring the different aspects of digital security. There are varying definitions depending on the source you reference, but at a high level, they all pertain to protecting digital data from unwanted or unauthorized access. Other terms often used to reference digital security include information security and cybersecurity.

Digital security keeps digital information and assets safe

Why is Digital Security Important?

In short, digital security is important because it helps keep digital information and assets safe. Poor security posture and ill-advised cybersecurity practices can lead to financial loss or even physical damage. To help conceptualize the potential damage cyber-attacks can create, consider the statistics below:

  • The average cost of a data breach is $3.86 million USD
  • Identity theft has impacted almost 60 million Americans
  • Cryptojacking attacks have surged in recent years, with Symantec blocking 8 million cryptocurrency mining-related events in December 2017 alone
  • The United States is the biggest target for cybersecurity attacks, accounting for 38% of targeted attacks from 2015 to 2017

Reference: Symantec Corporation/Norton

While there will always be zero-day threats and risks associated with digital technology, digital security is important because it helps reduce risk and increases the ability of consumers and businesses alike to operate safely in our digitally connected world.

What are the Different Aspects of Digital Security?

Digital security includes a variety of subtopics and subdomains. Here we will review some of the most important and discuss some trends and tips to help you in 2019.

Cryptography

Cryptography deals with encoding communications and data in a way that makes them illegible unless you have the means to decode them. By leveraging modern cryptographic algorithms and ciphers, developers can help ensure that their applications and devices remain secure. In 2019, developers should not be using hash algorithms like MD5 or SHA-1 for secure applications and should instead consider algorithms such as scrypt or SHA-3 where needed. Similarly, for networked applications, TLS 1.2 and TLS 1.3 should be favored over SSL v2, SSL v3, TLS 1.0, and TLS 1.1.
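As a quick illustration using only Python’s standard library (the inputs are placeholders), the sketch below contrasts a legacy digest with a modern one and shows how a memory-hard KDF such as scrypt is invoked for password handling:

```python
# Contrast a legacy digest with a modern one using Python's stdlib.
# MD5 appears here only to illustrate what NOT to use for security.
import hashlib

data = b"sensitive payload"  # placeholder input

legacy = hashlib.md5(data).hexdigest()       # broken: collisions are practical
modern = hashlib.sha3_256(data).hexdigest()  # current SHA-3 standard

print(len(legacy), len(modern))  # 32 hex chars vs 64 hex chars

# For passwords specifically, prefer a memory-hard KDF such as scrypt
# (maxmem raised so the 16 MiB working set fits the backend's limit):
key = hashlib.scrypt(b"password", salt=b"unique-salt",
                     n=2**14, r=8, p=1, maxmem=2**26)
print(len(key))  # 64-byte derived key by default
```

The longer SHA-3 digest and the deliberately expensive scrypt derivation are exactly what make brute-force and rainbow-table attacks impractical compared with MD5.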

Authentication and Authorization

Authentication and authorization deal with ensuring a user is identified in a system and authorized to access a given function or piece of information. An example of poor authentication and authorization practices can be found in the previously linked Norton article where a “smart” door lock could be unlocked over the Internet without using a password.

Developers of such devices can no longer be lax in implementing proper security practices. Leveraging methods such as Multi-factor Authentication (MFA), SSH keys, and secure remote authentication methods can help avoid such egregious oversights.
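To illustrate one common MFA factor, here is a minimal time-based one-time password (TOTP) sketch in the spirit of RFC 6238. It is for illustration only; production systems should rely on a vetted library rather than hand-rolled crypto:

```python
# Minimal TOTP sketch (RFC 6238 style): server and token derive the same
# short-lived code from a shared secret and the current 30-second window.
import hashlib, hmac, struct, time

def totp(secret, timestep=30, digits=6, now=None):
    """Compute a time-based one-time password from a shared secret."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp(b"shared-secret", now=59))  # deterministic 6-digit code for that window
```

Because the code changes every 30 seconds and never travels as a reusable password, a captured value is useless to an attacker moments later, which is what makes TOTP a useful second factor.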

Firewalls, Anti-virus, Threat Detection, & Access Controls

Firewalls and access controls help restrict network access to devices. This form of cybersecurity is an important part of keeping data secure, particularly for Internet-connected devices. For example, a modern firewall can prevent access to a device if the connection is attempted using a particular port or protocol. Access control lists (ACLs) help restrict access based on IP (Internet Protocol) address to help prevent unauthorized access. Developers of embedded devices can enhance the security of their solutions by enabling firewall or ACL functionality within their devices.
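A simple IP allow-list check of the kind an embedded device might enforce can be sketched with Python’s standard ipaddress module (the networks below are illustrative):

```python
# Sketch of an IP-based access-control check; the allow-list entries
# are illustrative placeholders, not a recommended configuration.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal LAN
    ipaddress.ip_network("192.168.1.0/24"),   # management subnet
]

def is_allowed(client_ip):
    """Return True if the client address falls inside any allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.20.30.40"))   # inside 10.0.0.0/8, accepted
print(is_allowed("203.0.113.7"))   # public address, rejected
```

Even a check this simple, applied at the device’s network layer, shrinks the attack surface by refusing connections before any application logic runs.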

Anti-virus and threat detection software can also help businesses detect, contain, and respond to digital security threats. Artificial intelligence (AI) and machine learning (ML) are making a big impact in this area. AI and ML enabled cybersecurity appliances such as Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS) can rapidly identify threats and enable businesses to improve their information security posture.

User Behavior

Humans that have access to sensitive data should be taken into consideration when implementing a security strategy. Social engineering and tactics like phishing are common attack methods hackers use to infiltrate networks. This means a more informed user base can lead to better digital security.

If your user base includes individuals who continue browsing after a browser warns that a site is insecure, data can be more easily compromised. For this reason, user education and training are important when implementing a plan to strengthen digital security posture. For example, educating users to only install signed USB drivers from trusted sources, and teaching them how to tell the difference between a website secured by HTTPS and one that uses HTTP only, are excellent ways to educate a user base.

From a developer’s standpoint, it is important to keep in mind that businesses in 2019 are now investing in educating that user base on the importance of digital security. This is evidenced by the growth of companies such as KnowBe4 that focus on educating users on security tips. This means developers must be sure to design devices that use only signed and trusted drivers and create applications that support secure network protocols and access methods.

How has Digital Security Evolved?

It is important to understand that, like many other aspects of technology, digital security moves fast. This is particularly important because outdated security protocols and practices can lead to vulnerabilities that hackers can exploit. Before digitization, information security was mostly a function of ensuring secure access methods to physical copies of data.

While the importance of physical security still remains, things have become much more complex. The 1990s saw the rise of anti-virus programs and the more widespread use of firewalls. The 2000s ushered in cloud computing, which changed the cybersecurity paradigm. The explosion of IoT and smart devices in the 2010s has made digital security more important than ever. Now we must be able to secure not only the devices users access but also devices that act autonomously.

For example, CAN Bus hacking is a major threat vector that can impact a variety of smart devices, including smart vehicles.

How does Total Phase Help Enable Digital Security?  

Total Phase offers a variety of tools for debugging, developing, and monitoring embedded systems. As IoT and smart devices continue to grow, our solutions that can be leveraged through the entire product lifecycle will help developers design secure and robust products.

For example, our CAN monitoring and debugging tools help enable developers to design robust CAN systems. Similarly, our Promira Serial Platform can be leveraged by an engineer to help design an IDS based on specifications.

Summary

As our lives become more digitally connected, digital security will continue to grow in importance. As we close out the decade, protecting smart devices, embedded systems, and all aspects of IoT and Industry 4.0 should be a priority for businesses and consumers alike. Sound digital security requires accounting for a variety of attack vectors and subcategories of security, including user education. Total Phase can help engineers and developers by providing industry-leading debugging, monitoring, and programming tools to aid in the design of robust and secure systems.

Request a Demo
