
How Do I Start Working with SMBus Devices?


Question from the Customer:

I need to program and monitor an SMBus device. Which Total Phase instruments could I use for this project?

Response from Technical Support:

Thanks for your question! The System Management Bus (SMBus) protocol is a derivative of the Inter-Integrated Circuit (I2C) protocol. Because of their similarities, our I2C host adapters and I2C protocol analyzer can be used with your SMBus project.

Comparing I2C and SMBus Protocols

I2C and SMBus are both 2-wire buses using a master and addressable slaves. SMBus is primarily used on PC motherboards and in embedded systems for monitoring critical parameters such as voltage supply, temperature, and fan control.

There are some differences in what these two protocols support and how they function. Both buses support a bit rate of 100 kHz; I2C also supports higher speeds, so the two are only compatible at or below 100 kHz. Additionally, SMBus specifies a minimum clock speed and a 35 ms timeout interval, while an I2C device can hold the clock low as long as necessary.

For more information on these differences, you can take a look at these articles:

I2C Tools that Work with SMBus Devices

Here is a table so you can quickly compare the Total Phase devices that support I2C. (Note: the Cheetah SPI Host Adapter only supports the SPI protocol). The other tools can support both I2C (SMBus) and SPI protocols.

Compare the features of the Promira and the I2C/SPI Active Applications, and the Cheetah and Aardvark host adapters.

Following is a summary of the Total Phase devices that can support your project.

I2C and SMBus Host Adapters

We have two host adapters that are compatible with the SMBus.

Aardvark I2C/SPI Host Adapter

The Aardvark I2C/SPI Host Adapter is a fast and powerful bus host adapter that communicates with your computer via USB and can function as a master or a slave. In addition to I2C and SMBus devices, this adapter also interfaces with SPI buses.
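To illustrate, here is a minimal sketch of an SMBus-style Read Byte transaction with the Aardvark adapter acting as the bus master, written in Python against the aardvark_py bindings that ship with the Aardvark Software API. The port number, slave address, and command code are hypothetical placeholders for your device.

from array import array
from aardvark_py import *   # Total Phase Aardvark Python bindings

SLAVE_ADDR   = 0x48   # hypothetical 7-bit SMBus device address
COMMAND_CODE = 0x00   # hypothetical command/register to read

handle = aa_open(0)                      # open the first Aardvark adapter
if handle <= 0:
    raise RuntimeError("Unable to open Aardvark adapter")

aa_configure(handle, AA_CONFIG_SPI_I2C)  # enable the I2C subsystem
aa_i2c_bitrate(handle, 100)              # 100 kHz, within the SMBus range

# SMBus Read Byte: write the command code without a stop condition,
# then read one data byte after the repeated start.
aa_i2c_write(handle, SLAVE_ADDR, AA_I2C_NO_STOP, array('B', [COMMAND_CODE]))
(count, data_in) = aa_i2c_read(handle, SLAVE_ADDR, AA_I2C_NO_FLAGS, 1)
print("Read byte: 0x%02x" % data_in[0])

aa_close(handle)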

Promira Serial Platform

The Promira Serial Platform is an advanced platform that communicates faster via Ethernet or USB, provides built-in level shifting, and more. The license options (purchased separately) support a range of protocols; for your project, the I2C Active - Level 1 Application is recommended. The Promira platform can function as an I2C master or I2C slave.

I2C and SMBus Protocol Analyzer

The Beagle I2C/SPI Protocol Analyzer can be used to monitor and capture real-time I2C data. The Data Center Software offers SMBus decoding when used with the Beagle I2C/SPI analyzer. To enable SMBus decoding, select the SMBus Decoding option in the I2C Configuration Manager dialog. For more details on how to decode SMBus transactions in the Data Center Software, please see section 8.5.2, SMBus, in the Data Center Software User Manual.

Other SMBus Tools

Test and Development

  • With its I2C Port Expander, the I2C/SPI Activity Board can be used to test and develop SMBus devices.
  • The I2C Development Kit includes the activity board as well as the Aardvark I2C/SPI Host Adapter, the Beagle I2C/SPI Protocol Analyzer, and cables. This is a powerful and cost-effective way to start a new project.

Software Applications

Using Our Tools with SMBus

In addition to the information in the user manuals, here are two examples of using our tools with the SMBus:

Additional resources that you may find helpful include the following:

We hope this answers your question. Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about our Total Phase products.

Request a Demo


How Can I Resolve Buffer Overflow for Asynchronous CAN Messages?


A Question from the Customer:

I have a question about communicating with an external CAN device. I'm working on a PC application to transfer data over the Komodo CAN Duo Interface. The bus is configured to run at 1 Mbps, and I'm starting this project with 16 KB of data. Once that is successful, the data blocks will be increased to megabytes of data.

With the current architecture, data is split into 32-byte "packets" plus five bytes of header information. This packet structure is then split into 8-byte CAN packets for transfer via the Komodo interface. Each of the 37-byte packets is acknowledged with a 2-byte ACK message from the receiver. The five 8-byte CAN packets are not ACK’d individually. I am using the Komodo Software API.

Here’s a summary of what I’m doing:

  1. Call the km_can_async_submit() function 5 times with the 37 bytes of data split into 4 8-byte messages and 1 5-byte message.
  2. Call km_can_read() configured with a 10 second timeout and wait for a successful 2-byte acknowledge response before continuing with the next 37-byte transmission.
  3. Repeat this process until the entire 16 KB of data has been transferred.

The known issues and fixes:

I can send the first 12 37-byte packets successfully and receive an acknowledgement. However:

  1. No acknowledgement is received for the 13th packet.
  2. I’ve worked with the Komodo timeout value as well as the latency. Lowering the latency does make the process faster before failing, but the failures still occur.
  3. I've also tried multiple calls to km_can_read() thinking the missing ACK message got buried under some empty messages, but even after multiple calls to km_can_read() the responses are always 0.

The “work around” that I am using:

  • I can transfer data by resetting the Komodo interface after every 11 packets by calling km_close() and KomodoApi.km_open().

The improvements I’m looking for:

  • It works, but is there a way to do this without closing and re-opening the connection? Is there a buffer that's not being cleared unless the Komodo interface is re-initialized?

Response from Technical Support:

Thanks for your question! As implied in your statement, the issues you observed are due to buffer overflow. The solution we have for you is relatively simple. To better understand how the solution works, we will first go over the mechanics of latency settings, asynchronous messages, and what affects the data buffer.

The Effects of Latency Settings

Setting a small latency can increase the responsiveness of the read function. Please note, there is a fixed cost to processing each individual buffer that is independent of buffer size. Here are the trade-offs to consider:

  • Setting a small latency value increases the overhead per buffered byte.
  • Setting a large latency value decreases that overhead, but it increases the amount of time that the library must wait for each buffer to fill before the library can process its contents.

How the Latency Function Works

The Komodo km_latency function sets the capture latency to the specified number of milliseconds. For example, if latency_ms is set to “1”, then the latency is 1ms.

The capture latency effectively splits up the total amount of buffering into smaller individual buffers (sub-buffers). After one of these individual buffers is filled, the read function returns.

  • To fulfill shorter latency requirements, these individual buffers are set to a smaller size.
  • If a larger latency is needed, then the individual buffers are set to a larger size.

In other words, a small latency_ms value in the km_latency function produces small sub-buffers, and a large latency_ms value produces large sub-buffers.
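For a rough sense of scale, here is an illustrative calculation; the actual sub-buffer sizes used internally by the Komodo DLL are not published, so treat these numbers as order-of-magnitude only.

# Illustrative only: approximate data volume per sub-buffer implied by a
# latency setting, assuming a fully loaded 1 Mbps CAN bus.
bitrate_bps  = 1_000_000
bytes_per_ms = bitrate_bps / 8 / 1000    # ~125 raw bytes arrive per millisecond

for latency_ms in (1, 10, 100):
    print(f"latency {latency_ms:3d} ms -> roughly "
          f"{bytes_per_ms * latency_ms:.0f} bytes buffered per read")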

Latency vs. Timeout

The latency setting is distinctly different from the timeout setting. Latency should be set to a value that is less than the timeout.

Buffering Asynchronous Messages

There is buffering within the Komodo DLL, on a per-device basis, to help capture asynchronous messages. Here is a case of a Komodo interface receiving CAN messages asynchronously:

  • If the application calls the function to change the state of a GPIO while unprocessed asynchronous messages are pending, the Komodo interface will modify the GPIO pin, but also save any pending CAN messages internally.
  • The pending messages will be held until the appropriate API function is called.

Active and Passive CAN Modes

The Komodo interface can be configured as an active CAN node or as a passive CAN monitor. A CAN channel can receive messages asynchronously with respect to the host PC software. Between calls to the Komodo API, these messages must be buffered somewhere in memory. This buffering is handled by the operating system of the host PC. The buffer is limited in size; once it is full, bytes will be dropped.

Buffer Overflow and Asynchronous Messages

An overflow occurs when the Komodo interface receives asynchronous messages faster than the rate at which they are processed: the receive link becomes "saturated". This condition can also affect synchronous communication with the Komodo interface.

How to Alleviate Buffer Saturation

There are two ways to relieve the receive saturation problem:

  • Reduce the amount of traffic that is sent by all CAN nodes between calls to the Komodo API. If you use this method, you will need to reconfigure the offending CAN device(s).
  • Poll the CAN channel to collect pending messages more frequently.

API Solution

Based on your request and setup, we recommend using the API command km_can_write in your script. This function acts as a wrapper for the asynchronous submit and collect functions:

km_can_async_submit(komodo, channel, flags, packet, num_bytes, data);
km_can_async_collect(komodo, KM_TIMEOUT_INFINITE, arbitration_count);

How this works:

  • The CAN packet is submitted asynchronously. km_can_async_collect is called with KM_TIMEOUT_INFINITE to continue blocking until a response is received.
  • A KM_CAN_ASYNC_PENDING error is returned if there are any uncollected asynchronously submitted packets.

 Note: Packets that are submitted with km_can_async_submit should always be collected using km_can_async_collect.
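As a minimal sketch of the recommended approach, the loop below (Python, komodo_py bindings) sends one 37-byte application packet as CAN frames with km_can_write, which submits each frame and blocks until its arbitration result is collected, so no uncollected packets are left to saturate the buffer. The port, channel setup, and arbitration ID are assumptions standing in for your configuration; please verify the exact binding signatures against the Komodo API documentation for your version.

from array import array
from komodo_py import *   # Total Phase Komodo Python bindings

km = km_open(0)                              # open the first Komodo port (assumed)
if km <= 0:
    raise RuntimeError("Unable to open Komodo interface")

# Acquire and enable CAN channel A.
km_acquire(km, KM_FEATURE_CAN_A_CONFIG | KM_FEATURE_CAN_A_CONTROL)
km_can_bitrate(km, KM_CAN_CH_A, 1000000)     # 1 Mbps, as in your setup
km_enable(km)

pkt = km_can_packet_t()
pkt.id          = 0x123                      # hypothetical arbitration ID
pkt.remote_req  = 0
pkt.extend_addr = 0

payload = array('B', range(37))              # one 37-byte application packet
for offset in range(0, len(payload), 8):
    chunk = payload[offset:offset + 8]       # 4 x 8-byte frames + 1 x 5-byte frame
    pkt.dlc = len(chunk)
    (ret, arb_count) = km_can_write(km, KM_CAN_CH_A, 0, pkt, chunk)
    if ret != KM_OK:
        raise RuntimeError("CAN write failed: %d" % ret)

km_close(km)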

Additional resources that you may find helpful include the following:

We hope this answers your question. If you need more information, you can contact us and we’ll go over your requirements. You can also request a demo that applies to your application.

Request a Demo

With the Promira Serial Platform, How Do I Set the SPI Clock to 0 Volts after Closing the SPI Device?


Question from the Customer:

I am using the Promira Serial Platform to drive an SPI bus. For this application, I am using the Promira Software API I2C/SPI Active wrapped in my custom code. My issue: I need the clock (SCLK) to reach 0V in its off state, but so far I have not been able to achieve that.

What I see:

  1. I have a scope channel probe in-line with the SCLK line from the Promira platform to the target hardware.
  2. After the code segment provided, the scope channel shows about 2V.
  3. In this application, I need 0V for the off state. However, the line voltage on the bus is back-powering the device, and does not allow it to reach the OFF state.

The current code sequence:

  1. Connect the Promira platform via USB
  2. Measure bus interface signals at connector = 0.0V
  3. Initialize Promira platform to drive SPI at voltage drive levels of 1.8V
  4. Communications work properly and no issues present
  5. Close driver
  6. Check pin states and I see that the SCLK remains at 1.8V

Closing the device, as shown in application notes:

public static void closeTool(bool echo)
{
    SpiMasterOE(g_channel, g_queue, 0, echo);  // Disable master output
    Promact_isApi.ps_queue_destroy(g_queue);   // Destroy the queue
    DevClose(g_pm, g_conn, g_channel);         // Close the device and exit
}

Details of my setting up the clock phase:

Promact_isApi.ps_spi_configure(g_channel,
                               PromiraSpiMode.PS_SPI_MODE_3,
                               PromiraSpiBitorder.PS_SPI_BITORDER_MSB,
                               0);

I am considering a different approach, such as closing GPIO to drive the SCLK pin to 0V:

  1. Close Promira SPI
  2. Reopen as GPIO
  3. Set all as output with value 0x00
  4. Close GPIO

Could that work? What are your recommendations to drive the SCLK pin to 0V?

Response from Technical Support:

Thanks for your question! Based on the information you provided, we see that SPI Mode 3 is being used: PromiraSpiMode.PS_SPI_MODE_3. In this mode, the default state of the clock is high, which is why you are seeing 1.8V on the SCLK line. We have a solution for you that doesn’t require opening and closing GPIO lines.

GPIO in SPI Mode

Here are the reasons why opening and closing the GPIO lines will not have the results you are looking for:

  • The SPI pins on the Promira platform are not shared with GPIO pins.
  • There are no pull-up resistors on the SPI clock and data lines.

API Script to Drive SCLK to 0V

Here is a solution that drives SCLK to 0V by adding one line of code to the original device-close routine, which changes the SPI mode to Mode 0. This way, SCLK will be 0V when the device is closed.

public static void closeTool(bool echo)
{
    Promact_isApi.ps_spi_configure(g_channel,
                                   PromiraSpiMode.PS_SPI_MODE_0,
                                   PromiraSpiBitorder.PS_SPI_BITORDER_MSB,
                                   0);          // Switch to SPI Mode 0 so SCLK idles low
    SpiMasterOE(g_channel, g_queue, 0, echo);   // Disable master output
    Promact_isApi.ps_queue_destroy(g_queue);    // Destroy the queue
    DevClose(g_pm, g_conn, g_channel);          // Close the device and exit
}

The figure below shows the four SPI clock modes (SPI_MODE_0/1/2/3), which are defined by two clock parameters: clock polarity (CPOL) and clock phase (CPHA).

Figure: SPI clock modes, defined by clock polarity (CPOL) and clock phase (CPHA)

  • Mode 0: CPOL = 0, CPHA = 0 (clock idles low)
  • Mode 1: CPOL = 0, CPHA = 1 (clock idles low)
  • Mode 2: CPOL = 1, CPHA = 0 (clock idles high)
  • Mode 3: CPOL = 1, CPHA = 1 (clock idles high)
  • "Sample" indicates on which edge of the SPI clock the data is latched.
  • For both Mode 0 and Mode 3, data is latched on the rising edge.

However, with the difference in polarity, changing the mode from MODE 3 to MODE 0 ensures SCLK will be 0V when the device is disabled. For more information, please refer to the section SPI Modes in the Promira Serial Platform I2C/SPI Active User Manual.

We hope this answers your question. Additional resources that you may find helpful include the following:

If you have any questions or want more information about Total Phase tools, you can email us a message or request a demo that is specific for your application.

Request a Demo

How Medical Embedded Systems Transformed the Healthcare Industry


An embedded system is a mix of hardware and software that works as a computing system within a larger system. The embedded system dates back to aeronautics in the 1960s when astronauts used Charles Stark Draper’s integrated circuit to collect flight data in real time. Since then, and with the explosive growth of technology and cloud computing, embedded technologies have been introduced in nearly every field of work. In the healthcare industry, embedded systems have come a long way to supplement patient care.

Rather than a check-up and a diagnosis, doctors can use medical equipment to further analyze the patient’s symptoms. Scientists, researchers, and medical professionals have worked together for years to create the highest quality equipment possible. MRI machines, X-ray technology, ECG machines, and CT scanners are just a few medical devices that implement embedded technologies nowadays.

So what are medical embedded systems?

Similar to embedded systems in other industries, medical embedded systems are allowing patients to monitor their health from home, and they also make for stronger understanding between patients and healthcare professionals. Devices are getting smaller and smarter, making them easier to use and easier to understand.

Medical devices can be intimidating and daunting to use. Making them more streamlined and efficient for patients makes their overall healthcare experience more positive. The Internet of Things (IoT) has seen exponential growth in recent years, in part due to the fact that smart technology simply makes things easier. So how are healthcare professionals using it? They are keeping doctors more attuned to their patients’ health and they are keeping patients more aware of their own health.

Image: medical embedded system (courtesy of Pixabay)

Application of an embedded system in the medical field

Track your own health

Being able to monitor your own health gives you the tools you need to more fully understand what is going on in your body and how you can help it. Medical equipment such as glucose monitors can aid those with diabetes in keeping an eye on their blood sugar levels, something that is crucial for diabetic health.

Instead of the constant finger pricks to test blood sugar, a small sensor can be inserted underneath the skin that provides a consistent read on glucose levels. The information from the sensor will be sent to your smartphone or another connected device for you to check at any time.

Fitness trackers work similarly, though they aren’t inserted underneath the skin. Embedded technologies in Apple Watches and Fitbits can track your heart rate, activity levels, and body composition to keep you aware of whether or not you are meeting your fitness goals. If your doctor asks you to lose a certain amount of weight, this technology can hold you accountable and track progress to your goals. Through the use of connected apps, you can also receive tips on nutrition and different ways to modify your workouts to maximize your abilities.

Remote monitoring

Having the ability to analyze patient information is helping doctors more accurately detect when new issues or imbalances arise. In recent years, medical embedded systems such as pacemakers have changed the outlook on cardiac health. Essentially, the embedded technology in the pacemakers makes them work as a mobile EKG machine; the sensors alert doctors to inconsistent heartbeats and they also offer a full report on the health of the patient’s heart.

Similar to the health app on iPhones or the sensors in watches with embedded technologies, but more advanced, are the modern CPAP machines. Healthcare professionals can monitor the sleep schedules of patients with sleep apnea outside of the hospital. The machine goes home with the patient, but the sensors in it notify the doctor about poor sleep habits so the doctor can then reach out to his or her patient to find a solution.

In the hospital, a bed is no longer just a bed; now it's a smart bed. Monitoring patients throughout the night can be time-consuming and tedious. To help nurses maximize their time with patients, smart beds can sense when a patient needs to be readjusted, and the bed's intelligent programming will make the adjustments automatically. If a patient is moving a lot or trying to get out of bed, the bed will send a notification to the nurse's connected device to come to the right room at the right time to check on the patient.

While some medical embedded systems are within the hospital, others are taken home. Their portable nature is extremely helpful for the patient, and though doctors do get notified of fluctuations in health, the patient also has a responsibility to properly track and maintain their health. The purpose of the advanced technology and wearable devices is to allow patients the freedom to live their daily lives and not have to take time out to get checked up as frequently.

Prosthetics and embedded sensors

Intuitively connected devices are now helping researchers and healthcare professionals make strides in the world of prosthetics. Losing a limb is a painful, difficult time for a patient. They must learn how to function without it, they may experience phantom limb sensations, and if they choose to wear a prosthetic, they must adjust to the new appendage.

Medical embedded technologies have been changing the way patients adapt to a fabricated limb. Usually, the frustration lies in the fact that it’s hard to tell a limb not connected to your brain how to move. With embedded systems, healthcare professionals and researchers can study neurotransmissions from an implanted neuromusculoskeletal interface to track sensory feedback. The information allows prosthetic developers to track a patient’s prosthetic control and motor intent. The bioelectric signals can then help adjust prosthetic functionality and make them more reliable for day-to-day tasks. It also makes for a more comfortable experience for the patient and puts their wellbeing at the center of the process.

Smart technology and clinical care

A big takeaway from all this is that healthcare is adapting to the shift of connected devices and the Internet of Things, but it is by no means removing the need for healthcare professionals.

Medical equipment needs to advance with technology to better detect patient symptoms and allow doctors to analyze those symptoms based on reports. Smart technology makes for more accurate readings of health and can make adjustments for patients with minimal human intervention, but this does not mean humans are not a necessary part of the applications. Doctors and nurses are using medical embedded systems to supplement their work and make for stronger, more well-rounded patient care. They are more in touch with what their patients need at more precise times. They can find abnormalities faster and detect potential trauma sooner.

Preventative care is proactive care. Smart technology can sense differences in the body much sooner than a patient might feel the symptoms. Embedded systems provide patients with a better understanding of their personal health and can thus help them pay more attention to what they need to be doing to care for their bodies.

Embedded, connected devices track medical conditions from anywhere and can allow patients to live a balanced life, free from constant doctor’s visits and trial and error medications.

Conclusion

Technicians, medical professionals, and engineers are seeking improvements to patient health daily. Monitoring devices are becoming more compact and easy to use. Sensors and pacemakers are getting smarter. The healthcare industry is making strides to improve the accessibility and proactivity of medical equipment. There is less room for error and more room for growth when doctors, nurses, and patients use embedded systems to stay attuned to their health.

 

How Do I Manage a Locked I2C Bus?


Image: How to manage a locked I2C bus (source: JanBaby)

Question from the Customer:

When I use the Aardvark I2C/SPI Host Adapter on a board for I2C, and the pull-ups are weak (10K on the board, Aardvark adapter pull-ups not enabled), I2C transactions work fine at 100 kHz. But if I change the bitrate to 400 kHz, I2C transactions quickly fail with status code 6: the bus is locked. I observed that after generating a stop condition, the Aardvark adapter appears to hold the SDA line low.

I’ve been looking at the details of the signals, but so far, I have not yet located the cause. Do you have any suggestions of what to look for that causes bus lock?

Also, I have had no luck recovering from the bus lock condition and releasing the SDA line. I've tried using the Aardvark Software API command aa_i2c_free_bus, but I have had to disconnect, power cycle, and then reconnect the Aardvark adapter.

Response from Technical Support:

Thanks for your questions! The bus error is a generic error that is received from the hardware. A bus error occurs when a START or STOP condition occurs at an illegal position in the format frame.

Examples of illegal positions include during the serial transfer of an address byte, a data byte, or an acknowledge bit. Such bus errors rarely occur; it is possible there is an incorrect configuration or condition in your setup. We can provide information about possible causes of a bus lock condition and show you how to easily release the Aardvark adapter from the bus lock.

Lockout Status

The status code for lockout is AA_I2C_STATUS_BUS_LOCKED, which indicates an I2C packet is in progress and the time since the last I2C event executed or received on the bus has exceeded the bus lock timeout. Most likely, this is due to the clock or data line of the bus being held low by some other device, so the Aardvark adapter cannot execute a start condition.

For information about other extended status codes, please refer to the section I2C Interface of the Aardvark I2C/SPI Host Adapter User Manual.

What Triggers Bus Lock

The bus lock timeout is measured between events on the I2C bus. An event can be a start condition, the completion of 9 bits of data transfer, a repeated start condition, or a stop condition. For example, if the full 9 bits are not completed within the bus lock timeout (due to clock stretching or some other error), the bus lock error will be triggered.

  • Please note: When the Aardvark adapter detects a bus lock timeout, it will abort its I2C interface, even if the timeout condition occurs in the middle of a byte transfer.
  • When the Aardvark adapter is acting as an I2C master device, this may result in executing only a partial byte on the bus.

Bus Lock Timeout

You can use an API function to set the duration of the bus lock. This way, you will no longer need to disconnect and power cycle the Aardvark adapter when a bus lock occurs.

The Aardvark API function to use is aa_i2c_bus_timeout, which sets the I2C bus lock timeout in milliseconds.

int aa_i2c_bus_timeout (Aardvark aardvark, aa_u16 timeout_ms);

Arguments

aardvark      handle of an Aardvark adapter
timeout_ms    the requested bus lock timeout in ms

Return Value

This function returns the actual timeout set.

Specific Error Codes

None.

The power-on default timeout is 200 ms. The minimum timeout value is 10 ms and the maximum is 450 ms. If a timeout value outside this range is passed to the API function, the timeout will be restricted to this range.

  • The exact timeout that is set can vary based on the resolution of the timer within the Aardvark adapter. The nominal timeout that was set is returned back by the API function.
  • If the bus is locked during the middle of any I2C transaction (master transmit, master receive, slave transmit, slave receive) the appropriate extended API function will return the status code AA_I2C_STATUS_BUS_LOCKED.
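For example, here is a minimal Python sketch (aardvark_py bindings) that sets the bus lock timeout and frees the bus on failure instead of power cycling. The port, slave address, and payload are placeholders, and the simple error check shown here stands in for the extended functions that report AA_I2C_STATUS_BUS_LOCKED explicitly.

from array import array
from aardvark_py import *   # Total Phase Aardvark Python bindings

handle = aa_open(0)
aa_configure(handle, AA_CONFIG_SPI_I2C)
aa_i2c_bitrate(handle, 400)               # 400 kHz, where the lockups occur

actual = aa_i2c_bus_timeout(handle, 150)  # request a 150 ms bus lock timeout
print("Bus lock timeout set to %d ms" % actual)

status = aa_i2c_write(handle, 0x50, AA_I2C_NO_FLAGS, array('B', [0x00, 0xAB]))
if status < 0:                            # a negative return indicates an error
    print("Write failed (%d); attempting to free the bus" % status)
    aa_i2c_free_bus(handle)               # release the bus without a power cycle

aa_close(handle)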

Additional resources that you may find helpful include the following:

We hope this answers your question. Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about our Total Phase products.

Request a Demo

What are the differences between DisplayPort vs HDMI?


Image: DisplayPort vs. HDMI (original photo by Csaba Nagy)

HDMI and DisplayPort are the two recognized standards for transmitting video and audio data over a single cable. Throughout the years, these two standards have evolved greatly with each new specification, each providing enhanced performance and capabilities, including major increases in bandwidth, speed, and other supported features.

Because both standards are widely used throughout the industry, it can be tough to pinpoint their differences and why one standard might be best used for certain applications. Here, we’ll help discern the two by providing their backgrounds and discussing their current features and capabilities.

Background of HDMI and DisplayPort

HDMI, or High-Definition Multimedia Interface, was first introduced in 2002 as a standard to allow for a single cable to transfer uncompressed high-definition video, multi-channel audio, and data over a single digital interface. A group of display manufacturers, including Hitachi, Panasonic, Philips, Silicon Image, Sony, and Toshiba, formed the HDMI organization to conceptualize and oversee the development of this standard.

Its conception was driven by a need for a cable to connect video sources to displays, including mainly consumer-electronics applications like DVD and Blu-ray players, TVs, and video projectors. Today, HDMI has been widely adopted and we commonly see HDMI ports on a large number of televisions and computers in our homes.

The HDMI connector includes 19 pins and offers 3 different connector types including the Standard Type-A, the mini Type-C and the micro Type-D. You might recognize the Standard Type-A HDMI connector as it is the most common.

In 2006, the DisplayPort (DP) standard was conceptualized as a new type of digital display interface, intended to phase out outdated VGA and DVI ports. It initially focused on computer displays and professional IT equipment. The DisplayPort standard is administered by the VESA (Video Electronics Standards Association), which is overseen by multiple PC and chip manufacturing companies including Apple, AMD, and Intel.

DisplayPort has 20 pins and has 2 different connector types: DisplayPort and mini DisplayPort. The mini DisplayPort was introduced by Apple and is used in various Apple MacBook PCs, swapping out the previous DVI/VGA ports. Many of Apple’s Thunderbolt ports use the mini DisplayPort connector.

HDMI 2.1 vs DisplayPort 1.4/2.0 Features and Performance

Resolution, Bandwidth, and Display Features

When comparing resolution and bandwidth between the two standards, DisplayPort's version 1.4 delivers a maximum payload bandwidth of 25.92 Gbps over 4 lanes, and is the first standard to support 8K resolution at a refresh rate of 60 Hz with full-color 4:4:4 resolution and 30 bits per pixel (bpp) for HDR-10 support. However, the most recent published DP spec, 2.0, triples the data bandwidth performance up to a maximum payload of 77.37 Gbps, and allows for several configurations with multiple displays that go beyond 8K. Like DP 1.4, DP 2.0 will feature Display Stream Compression (DSC), which enables visually lossless compression for ultra-high definition display applications, Forward Error Correction (FEC), and HDR metadata transport.

The most recent HDMI 2.1 spec delivers a bandwidth of 48 Gbps over 4 lanes, and also supports 8K resolution with dynamic HDR at a refresh rate of 60 Hz or a 4K resolution with dynamic HDR at a refresh rate of 120 Hz. HDMI 2.1 focuses on stepping up its game for viewing games, movies, and video with its enhanced refresh rates, making the display images smoother and more seamless. Specifically, this spec adds Variable Refresh Rate (VRR) to reduce and eliminate lag, Quick Media Switching (QMS) to eliminate image delays on the screen before content is displayed, and Quick Frame Transport (QFT) to reduce latency, resulting in less lag and a more real-time feel for interacting with virtual reality. HDMI can also incorporate an Ethernet channel, allowing devices to share a wired internet connection that can carry data between two connected devices.

Multiple Display Capabilities

One of the major benefits of using DisplayPort is its ability to connect multiple displays together, which HDMI cannot do. One feature within the DisplayPort standard, called Multi-Stream Transport (MST), allows the video source to send multiple independent video signals over a single DisplayPort output. With this feature, devices can be connected through an external hub, or, as with Thunderbolt, devices can be linked together through a method known as daisy chaining.

 

Image: DisplayPort multiple displays connected together (photo by Tom Pijnappel)

Also, with VESA’s Coordinated Video Timing standard, there is enhanced interoperability between video sources and displays, creating better compatibility with formatting, refresh rates, and timing specifications.

Unlike DisplayPort, HDMI can handle one single video and audio stream, meaning it can only support one display at a time. The MST feature in DP is not natively supported by HDMI, but can be achieved by using DisplayPort to HDMI hubs (if you have a DP connection on the source device).

Audio

While both HDMI and DisplayPort support high quality audio, HDMI includes a feature not implemented in DP called ARC, or Audio Return Channel. This feature allows users to conveniently send television audio back to the A/V receiver or any other sound system being used with one single HDMI cable; this would normally require a second audio-only cable in the mix. Furthermore, HDMI 2.1 improves the ARC function, by introducing eARC, or Enhanced Audio Return Channel, which provides improved audio quality due to the increased audio bandwidth allocation up to 37 Mbps.

Which Cable Should I Use?

These days, HDMI can be found on most television sets and is considered the standard for video and audio transmission between a video source, like a DVD player, Blu-ray player, or PC, and a video display. DisplayPort, however, is targeted as the go-to standard for computer display interfaces, and is also well suited to PC gaming and connecting video game consoles and video graphics cards. Because each cable provides very similar functions for transferring video and audio, choosing between the two is a matter of your specific setup and requirements, especially as each cable standard continues to improve with each new specification release.

Comprehensively and Quickly Test HDMI and DisplayPort Cables

With the Advanced Cable Tester v2 introduced by Total Phase, cable manufacturers can now comprehensively and quickly test both HDMI and DisplayPort video standards against our set of assessments derived from the relevant cable specification from HDMI.org and VESA. We currently support:

Manufacturers can determine the quality of the video signals through our complete set of tests, including continuity testing with checks for shorts and opens, DC resistance measurements, and signal integrity testing of data lines up to 12.8 Gbps per channel. Now, instead of only performing functional tests on each cable, testers can simply insert the cable and determine signal lock and overall cable quality in a matter of seconds. Cable certification alone is not enough: the Advanced Cable Tester v2 grants the ability to go beyond it and uncover the inconsistencies that can and will occur during manufacturing and production.

For more information on this tool, please visit our website or email us at sales@totalphase.com.

Total Phase at Microchip MASTERs 2019


We just got back from Phoenix, Arizona, where luckily, we were able to spend the days inside at the 23rd annual Microchip MASTERs Conference! We enjoyed meeting everyone at the show and learning about the various projects of our customers.

 

Total Phase display at Microchip MASTERs 2019

 

Total Phase Analyzers - Live Demos

We had two demonstrations running at our booth. Our Promira Serial Platform emulated an I2C master polling an accelerometer in a loop via an API script; as part of the same setup, the Beagle I2C/SPI Protocol Analyzer monitored the I2C bus and streamed the read and write data in real time. On the other end, we showed the Beagle USB 480 Power Protocol Analyzer's ability to capture USB 2.0 data between a Mac and a USB mouse while simultaneously capturing the current and voltage measurements on VBUS.

Learning with Power Delivery and Protocol Analyzers

This year, our Beagle USB 480 Protocol Analyzer and our USB Power Delivery Analyzer had a lot of interest. The Beagle USB 480 analyzer allows users to non-intrusively monitor low, full, and High-speed USB 2.0 data in real time. The USB PD analyzer can non-intrusively monitor PD traffic on the CC1 and CC2 lines, capture PD negotiation, and monitor current/voltage on VBUS and VCONN. Both tools were featured in classes presented by Microchip.

While in town, we visited some of our customers that were interested in learning more about our tools and seeing what’s new. We were able to show our newest tool, the Advanced Cable Tester v2. It tests USB Type-C, as well as legacy USB and video cables for DC resistance, continuity, signal integrity, and e-marker.

Upcoming Events at USB-IF

Microchip MASTERs was a fun and productive show for Total Phase! You can find Total Phase next at the upcoming USB-IF Workshop #115 in Portland, Oregon, ESC Silicon Valley in Santa Clara, California, and USB-IF Dev Days in Seattle, Washington. Learn more about these upcoming events here.

 

How Can I Best Communicate with Multiple SPI Slaves?


Question from the Customer:

I have two Promira Serial Platforms; each has the SPI Active - Level 2 Application. How can I best communicate to multiple SPI slaves using separate SPI slave select (SS) signals?

Response from Technical Support:

Thanks for your question! There are two ways you can communicate through the SS signals: Control Center Serial Software and Promira Software API. Following are summaries of what each application can do for you.

Communicate via Control Center Serial Software

The Control Center Serial Software is an easy-to-use GUI application that provides access to Promira platform functions. Here is a summary:

  1. Connect the Promira Serial Platform to the Control Center Software.
  2. At the top menu bar, select Adapter and then click Multi I/O SPI.
  3. In the Multi I/O SPI window, select the SSn for the desired slave. The number of displayed Slave Select lines is dependent on how many slaves the attached device can support. You can also select the desired Bitrate.
  4. Set the command and address values.
  5. A large text box is provided for entering the data you want to send.
  6. Transactions are displayed below in the Transaction Log.

For more information, please refer to the article Which Tool Should I Use to Have One SPI Master Control Multiple Chips on an SPI Bus?

For a more automated process, you can use commands via an XML script in Batch Mode. For more information, please refer to the Batch Mode section in the Control Center Serial Software User Manual.

Communicate via Promira Software API

If you need more customized control, we recommend using Promira Software API. Using the API, you can create a new script or modify one of our sample scripts, which you can then run from the command line.

For communicating with multiple SPI slaves via separate SPI SS signals, we recommend taking a look at the API examples: spi_file and spi_slave. Both use the Promira platform for SPI slave implementation. Here is a key component of both examples:

To configure the SS lines, use a bitmask: 1 corresponds to enable; 0 corresponds to disable. The command to use is ps_spi_enable_ss.

Enable SS Lines (ps_spi_enable_ss)

int ps_spi_enable_ss (PromiraChannelHandle channel, u08 ss_enable);

Enables the selected SS lines and disables the corresponding GPIO lines.

Arguments:

channel         handle of the channel
ss_enable     bitmask based on the 8 SS lines where 1 corresponds to enable and 0 to disable.

Return Value:

A status code is returned with PS_APP_OK on success.

Details

ss_enable selects which pins are configured as SS lines instead of GPIO. The least significant bit is SS0.
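As an example, here is a hedged Python sketch of building and applying the bitmask; the connection sequence is modeled on the dev_open helper in our sample scripts and should be compared against the spi_file and spi_slave examples in your installation.

from promira_py import *      # Promira platform management bindings
from promact_is_py import *   # Promira I2C/SPI Active application bindings

ip = "10.10.10.10"            # the Promira platform's default IP over USB
pm = pm_open(ip)
pm_load(pm, "com.totalphase.promact_is")
conn = ps_app_connect(ip)
channel = ps_channel_open(conn)

# Bitmask: bit n enables SSn; the least significant bit is SS0.
# Enable SS0 and SS2, leaving the other pins available as GPIO.
ss_mask = (1 << 0) | (1 << 2)             # 0b00000101 == 0x05
if ps_spi_enable_ss(channel, ss_mask) != PS_APP_OK:
    raise RuntimeError("ps_spi_enable_ss failed")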

For more information, please refer to the API Documentation section from the Promira User Manual.

Additional resources that you may find helpful include the following:

We hope this answers your question. Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about our Total Phase products.

Request a Demo


5 Advantages of CAN Bus Protocol


What is CAN Bus Protocol?

The Controller Area Network (CAN) bus protocol is rapidly growing in popularity among engineers who work with high-level industrial embedded systems. The protocol was developed by Robert Bosch GmbH in 1986 to help further the development of electronic communications in the automobile industry. 

In the early 1980s, vehicle manufacturers were beginning to incorporate an increasing number of electronic devices, such as active suspension, gear and lighting control, central locking, and ABS into cars and trucks for the first time. For these electronic devices to function in unison, time tasks correctly, and share data, they would need to be wired together. 

Under the existing wiring standards, electronic modules communicated with each other using direct, point-to-point analog signal lines. Each module had a direct line connecting it to every other module it needed to communicate with, an architecture that was time-consuming to build and used an excessive amount of wiring.

The CAN protocol eliminates the need for excessive wiring by allowing electronic devices to communicate with each other along a single multiplex wire that connects each node in the network to the main dashboard. The multiplex architecture allows signals to be combined and transmitted over the entire network along a single wire, such that each electronic module in the vehicle receives data from sensors and actuators in a timely fashion.

An advantage of the CAN protocol is that it allows the many electronic devices in an everyday machine, such as a car, to communicate with one another simply and reliably.

The CAN protocol was standardized by the International Standards Organization (ISO) in 1993 and has since been divided into two standards: ISO 11898-1, which describes the data link layer of the protocol, and ISO 11898-2 which describes the physical layer. The unique properties of the CAN bus protocol have led to its increased popularity and adoption across industry verticals that leverage embedded networks, such as healthcare, manufacturing, and entertainment.

The 5 Advantages of CAN Protocol

1. Low Cost

When the CAN protocol was first created, its primary goal was to enable faster communication between electronic devices and modules in vehicles while reducing the amount of wiring (and the amount of copper) necessary. This is accomplished through the use of multiplex wiring, which enables the combination of analog and digital signals and their transmission over a shared medium.

To understand how multiplexing drives down the cost of wiring vehicles, we need to know a bit more about how wiring architecture worked before the CAN bus protocol was created. Along with the electronic devices or modules that control vehicle subsystems, cars and trucks also have sensors and actuators that capture data from the vehicle's operation and communicate it to modules where it is needed. 

A vehicle would have sensors for capturing data about its speed and acceleration, but feeding that data would require dedicated wires to each individual data recipient - that's one wire to communicate with the airbag system, one wire to communicate with the ABS braking system, another dedicated wire to engine control, etc. With the CAN protocol, a single wire connects all of the electronic systems, actuators, and sensors in the vehicle into one circuit that facilitates high-speed data transmission between all components.

The first vehicle to use CAN bus wiring was the BMW 850 coupe, released in 1989. Implementation of the CAN bus architecture reduced the length of wiring in the BMW 850 by 1.25 miles, which in turn reduced its weight by well over 100 pounds. Based on the current cost of copper wiring, the total cost savings from the saved materials would amount to nearly $600. Not only that, but the speed of communication was increased, with signal rates ranging from 125 kbps to 1 Mbps.

Low cost of implementation is one of the main reasons that we're seeing widespread adoption of the CAN bus protocol. Less wiring means less labor and lower material costs for embedded engineers.

2. Built-in Error Detection

One of the key features of the CAN bus protocol is that it supports centralized control over electronic devices that are connected to the network. In the CAN bus physical layer, each electronic device is called a node. Nodes can communicate with other nodes on the network, and each node requires a microcontroller, a CAN controller, and a CAN transceiver.

While each node is capable of sending and receiving messages, not all nodes can be communicating at once. The CAN bus protocol uses a technique called lossless bitwise arbitration to resolve these situations and determine which node should be given "priority" to communicate its message first. 
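To make lossless bitwise arbitration concrete, here is a small illustrative simulation in Python (not Total Phase API code). Each node transmits its identifier bit by bit; a dominant 0 overrides a recessive 1 on the wire, and a node that sends a recessive bit but reads back a dominant one drops out, so the lowest identifier wins without any data being destroyed.

def arbitrate(ids, width=11):
    """Simulate CAN lossless bitwise arbitration over width-bit identifiers."""
    contenders = set(ids)
    for bit in range(width - 1, -1, -1):      # the MSB is transmitted first
        # The wire carries the AND of all transmitted bits: 0 (dominant) wins.
        wire = min((i >> bit) & 1 for i in contenders)
        # Nodes that sent a recessive 1 while the wire shows 0 back off.
        contenders = {i for i in contenders if (i >> bit) & 1 == wire}
    return contenders.pop()                   # unique IDs leave a single winner

print(hex(arbitrate([0x1A4, 0x0F3, 0x2C1])))  # 0xf3: the lowest ID gains the bus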

Error handling is built into the CAN protocol, with each node checking for errors in transmission and maintaining its own error counter. Nodes transmit a special Error Flag message when errors are detected and will destroy the offending bus traffic to prevent it from spreading through the system. Even the node that is generating the fault will detect its own error in transmission, raising its error counter and eventually leading the device to "bus off" and cease participating in network traffic. In this way, CAN nodes can both detect errors and prevent faulty devices from creating useless bus traffic. 

3. Robustness

Durability and reliability are key areas of concern when choosing a communication protocol for deployment in your embedded engineering projects. As you deploy your products into the live environment, you'll want to choose a communication protocol that is self-sustaining, with the ability to carry on operating for long periods of time without outside maintenance or intervention.

This need makes the protocol's error detection capabilities particularly advantageous, as they enable systems to identify and recover from errors on their own without intervention from an outside actor. There are five mechanisms for detecting errors in the CAN protocol:

  1. Bit monitoring
  2. Bit stuffing
  3. Frame check
  4. Acknowledgment check
  5. Cyclic redundancy check

CAN high-speed bus lines are highly resistant to electrical disturbances, and the CAN controllers and transceivers that communicate with electronic devices are available in industrial or extended temperature ranges. 

A CAN bus cable is typically vulnerable to the failure modes listed in the ISO 11898 standard, such as:

  1. CAN_H interrupted
  2. CAN_L interrupted
  3. CAN_H shorted to battery voltage
  4. CAN_L shorted to ground
  5. CAN_H shorted to ground
  6. CAN_L shorted to battery voltage
  7. CAN_L shorted to CAN_H wire
  8. CAN_H and CAN_L interrupted at the same location
  9. Loss of connection to the termination network

While most CAN transceivers will not survive these types of failures, some electronics manufacturers have constructed fault-resistant CAN transceivers that can handle all of them, though they may have a restricted maximum speed as a trade-off. Together, these features expand the suitability of CAN bus networks for applications in the most rugged and demanding environments.

4. Speed

When the CAN protocol was first defined, it was described in three layers: the object layer, the physical layer, and the transfer layer. Later, when the CAN specification was created, specific definitions for the physical layer were excluded. This gave engineers the flexibility to design systems with transmission mediums and voltages that suited their intended applications. Later still, to help drive adoption of CAN devices and networks, standards were released for the CAN physical layer in the form of ISO 11898-2.

There are currently two defined physical layer standards, High Speed CAN and Low Speed CAN, each with its own advantages and disadvantages.

High Speed CAN offers signal transfer rates of between 40 kbps and 1 Mbps, depending on the length of the cable. CAN-based bus protocols like DeviceNet and CANopen use this physical standard to support simple cable connections with high-speed data transfer. 

Low Speed CAN offers lower signal transfer rates that start around 40 kbps and are often capped at or near 125 kbps. The lower signaling rates allow communication to continue on the bus even when a wiring failure takes place. While high-speed CAN networks terminate at either end of the bus line with a 120-ohm resistor, each device in a low-speed CAN network has its own termination. Low Speed CAN exhibits greater fault tolerance and is vulnerable to fewer failure modes, but its slower transfer speeds make it poorly suited to networks that require rapid and frequent communication.

5. Flexibility

To appreciate the flexibility of the CAN bus protocol in communications, we need to differentiate between address-based and message-based protocols. In an address-based communication protocol, nodes communicate directly with each other, and each message is delivered to a specific node address.

The CAN bus protocol is known as a message-based communication protocol. In this type of protocol, nodes on the bus have no identifying information associated with them. As a result, nodes can easily be added or removed (a process called hot-plugging) without performing any software or hardware updates on the system. 

This feature makes it easy for engineers to integrate new electronic devices into the CAN bus network without significant programming overhead and supports a modular system that is easily modified to suit your specs or requirements.

The Future of CAN Bus Protocol

CAN bus technology has been widely adopted across industry verticals. Thanks to its robustness, flexibility and the associated cost savings, we have seen CAN bus networks implemented in:

  • Trucks, buses and other passenger vehicles
  • Gasoline-powered and electric cars
  • Movie-set cameras and lighting systems
  • Gaming machines
  • Building automation equipment
  • Industrial automation and manufacturing equipment
  • Medical devices and instrumentation

In the future, the CAN bus protocol will remain the networking technology of choice for connecting electronic devices that require frequent, simple communications. Ethernet TCP/IP, a leading alternative to the CAN bus, still cannot deliver the same low resource requirements, low-cost implementations, reliability and error recovery capabilities of CAN bus networks. We will continue to see CAN networks deployed in IoT devices, industrial automation applications, connected medical devices and even more demanding and high-tech applications like satellites and spacecraft. 

How Total Phase Deals with CAN Protocol 

Are you planning to use the CAN bus protocol in an embedded engineering project? Whether you're building a product for the automotive, military, industrial, or aerospace sector, you'll need to invest in the right tools to help you program, monitor and debug your product.

Total Phase delivers tools with the functionality you need to send test transmissions on your CAN bus network or conduct non-intrusive monitoring to investigate network traffic, detect errors, and correct them as quickly as possible. With the Komodo CAN Solo Interface, embedded engineers can transmit data or monitor the bus. The Komodo CAN Duo Interface features two CAN channels, allowing engineers to emulate and monitor data from two CAN bus networks at the same time.

Want more insight into how our tools work for your specific application? Complete our Request a Demo Form Submission and we'll show you how easy it is to debug your next embedded CAN bus network project with the right tools from Total Phase.

 

Is there a Way to Adjust the Duty Cycle of the I2C Master Clock?


Question from the Customer:

I am using the Aardvark I2C/SPI Host Adapter. For my application, I need to adjust the I2C Master Clock Duty Cycle from 30% to 50%. How can I adjust the duty cycle?

Response from Technical Support:

Thanks for your question! The Aardvark adapter operates within the I2C specification, including the clock (SCLK); the duty cycle cannot be changed. However, a GPIO pin can be programmed with API commands to create the desired duty cycle.

For this application, we strongly recommend using the Promira Serial Platform with an I2C Active Application (Level 1 or Level 2). There are speed limitations for generating a clock pulse via GPIO, which are described below.

Speed Limitations of Generated Clocks

There are latencies that apply to both the Aardvark adapter and the Promira platform:

  • Operating system overhead
  • Latencies caused by the bus

Aardvark-specific limitations include its maximum clock rate, and its API commands are handled as blocking calls, not queues: a blocking function waits for the response from the device before returning. Also, the Aardvark adapter is only a full-speed USB device, which limits its speed over the USB connection to the PC. With all these factors, the maximum clock rate via GPIO would be far below 800 kHz, which is why we recommend the Promira platform.

Promira-specific advantages include its higher clock rates (determined by the licensed application level), and its commands are queued up and shifted to the device at once. The Promira platform is a High-speed USB device, allowing it to communicate faster over the USB connection. There is also the option of interfacing via Ethernet, which further reduces latency.

Using the API for Generating SCLK of Specified Duty Cycle

Promira API

Using the Promira Software API, we recommend these API commands to create a 50% duty cycle; the GPIO signal is switched on and off:


ps_queue_gpio_set(queue, STATE0)         # drive the clock pin high (bitmask STATE0)
ps_queue_delay_ms(queue, on_time_in_ms)
ps_queue_gpio_set(queue, STATE1)         # drive the clock pin low (bitmask STATE1)
ps_queue_delay_ms(queue, off_time_in_ms)

For the target duty cycle of 50%, on_time_in_ms = off_time_in_ms. The STATE0 and STATE1 parameters are set in terms of bitmasks of the available GPIOs. For more information, please refer to the article How Do I Create a Clock Duty Cycle that is “Outside the Spec” for an I2C Device?
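Below is a hedged Python sketch of queuing several 50% duty-cycle periods based on the snippet above; the pin bitmask and timing are placeholders, and the queue handle is assumed to have been created as in our sample scripts.

from promact_is_py import *   # Promira I2C/SPI Active application bindings

CLOCK_PIN = 1 << 0            # hypothetical bitmask of the GPIO used as the clock
STATE0 = CLOCK_PIN            # clock pin driven high
STATE1 = 0x00                 # clock pin driven low

def queue_clock_periods(queue, periods, on_time_in_ms, off_time_in_ms):
    # Equal on and off times give a 50% duty cycle.
    for _ in range(periods):
        ps_queue_gpio_set(queue, STATE0)
        ps_queue_delay_ms(queue, on_time_in_ms)
        ps_queue_gpio_set(queue, STATE1)
        ps_queue_delay_ms(queue, off_time_in_ms)
    # Shift the queued commands to the device in one batch with
    # ps_queue_submit (see the Promira API documentation for the signature).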

Aardvark API

Although programming with the Aardvark adapter is not recommended, an example is provided below in case the slower clock rate works for your application.

The GPIO values are set using bitmasks in aa_gpio_set(). The delay function can be replaced with aa_sleep_ms(). However, high frequencies of several hundred kHz cannot be achieved with the GPIO switching speed.
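For completeness, here is a hedged Python sketch of that Aardvark GPIO approach; the choice of pin and timing are placeholders, and as noted above the achievable frequency is limited by USB and GPIO switching latency.

from aardvark_py import *   # Total Phase Aardvark Python bindings

handle = aa_open(0)
aa_configure(handle, AA_CONFIG_GPIO_ONLY)   # use the pins as plain GPIO
CLOCK_PIN = AA_GPIO_SCL                     # repurpose the SCL pin as the clock
aa_gpio_direction(handle, CLOCK_PIN)        # configure the clock pin as an output

half_period_ms = 1                          # equal halves -> 50% duty cycle
for _ in range(100):                        # generate 100 slow clock periods
    aa_gpio_set(handle, CLOCK_PIN)          # drive the pin high
    aa_sleep_ms(half_period_ms)
    aa_gpio_set(handle, 0x00)               # drive the pin low
    aa_sleep_ms(half_period_ms)

aa_close(handle)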

For more information, please refer to the API Documentation section of the Aardvark I2C/SPI Host Adapter User Manual.

For an easy comparison, here’s a table that summarizes the features of I2C/SPI Total Phase tools.

Comparison of Promira, Aardvark, Cheetah and Beagle

Additional resources that you may find helpful include the following:

For information about specific applications:

We hope this answers your question. Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about Total Phase products.

Request a Demo

Why Am I Having 8K Compatibility Issues? Your HDMI Cable Might be the Culprit


The Latest in HDMI 2.1

8K televisions are the latest in-home entertainment technology, incorporating the most recent HDMI 2.1 specification, which has increased its bandwidth capabilities up to 48 Gbps and supports 8K60, 4K120, and even resolutions up to 10K.

HDMI 2.1 also incorporates new features that bolster image quality and bring gaming experiences to all new heights. New HDMI 2.1 cables offer dynamic HDR, enhancing image quality and color depth by rendering the image throughout the video, frame by frame.

HDMI 2.1 also improves video quality with enhanced refresh rates to improve the smoothness of the image, specifically for video, movies, and gaming. New features include Variable Refresh Rate (VRR) to reduce or eliminate lag during gameplay, Quick Media Switching (QMS) to help eliminate delays in screen display upon connection, Quick Frame Transport (QFT) to help reduce latency for less lags in gaming, and Auto Low Latency Mode (ALLM) to automatically establish the most fitting latency setting for smoother images.

Because of these newly added features focusing on smoother and more vibrant video experiences, gamers will be inclined to use 8K supported televisions and HDMI cables, but it’s quickly becoming apparent that even if a television supports 8K, a bad HDMI cable can cause major compatibility issues.

Obtaining an 8K Connection Might Not Be as Easy as They Say

A video circulating in the tech world, “They SAID this would be EASY... - Gaming at 8K 60fps”, by Linus Tech Tips, shows a complicated but rather humorous account of Linus attempting to connect a new Sharp 8K TV to a gaming system, which is supposedly as easy as “plug and play”, but he quickly realizes that it may not be as simple as it is made out to be.

During the video, we see a back-and-forth conundrum of trying various setups to correct the connectivity issues that arise as he tries to establish a quality image. However, even after multiple attempts, connecting the 8K TV to the gaming system still proved unattainable. What ends up finally solving the connectivity issues is swapping out the cables for a native HDMI to DisplayPort cable, rather than including adapters in the setup. While using the adapters should technically work the same, it goes to show that certain cables simply do not always perform as expected, even when they are certified.

Although video sources and 8K televisions can often appropriately handle the latest HDMI 2.1 specification, HDMI cables can still cause major interoperability issues, resulting in poor image quality, or even none at all.

Why do even certified cables not always function as expected?

Why is it that some HDMI cables, even ones that are certified, cause image and sound issues? These issues typically stem from the development and production process. With many variables involved, including the potential for human error while assembling cables, replicating a perfect cable each time is almost impossible. And while many cable manufacturers test their cables off the production line, they typically perform only functional testing: they plug the cable into a television, and if it displays an image, the cable is considered to pass.

This type of functional testing is not stringent. The variable refresh rates and resolutions described above, intended to ensure the picture always looks smooth for the consumer, can compromise resolution to maintain that smoothness, so the simple appearance of an image on the test screen is not a guarantee that the cable actually meets the HDMI 2.1 bandwidth specification; human eyes in a factory environment cannot discern a 2K picture from a 4K picture from an 8K picture. Furthermore, simply obtaining cable certification does not guarantee that each individual cable is up to standard during the actual production process. Below the surface, there are many components within the cable that need to be tested, including ensuring all pins and wires are correctly assembled, measuring the DC resistance of the cable, and verifying the signal quality.

Many manufacturers do not always ensure these components are up to standard, mainly due to the high cost of testing and the associated overhead. Total Phase has introduced a product, the Advanced Cable Tester v2, to combat this known issue within cable factories.

The Advanced Cable Tester v2 allows for production quality control with its complete set of testing.

The Advanced Cable Tester v2 Allows for Production Quality Control

The Advanced Cable Tester v2 is a one-stop cable tester that allows users to determine the quality, safety, and functionality of a cable with just one tool and a single test. Once a user plugs in a cable, they can determine in just a few seconds whether it is up to standard. Our comprehensive tests include pin continuity testing to detect shorts and opens, DC resistance measurements for non-data lines, and signal integrity testing up to 12 Gbps. Our enhanced signal integrity testing also provides eye diagrams and includes masks per the appropriate insertion loss cable specification; if any portion of the eye diagram touches the mask, the cable is flagged as a failure.

Because we’ve designed this tool to be a cost-effective cable testing solution in a high-output environment such as a cable factory, individual quality control is now feasible. Taking the right measures to perform individual quality control using the Advanced Cable Tester v2 allows complete assurance of quality while also offering low-cost consumables during the testing process.

The Advanced Cable Tester v2 tests video cables including HDMI and DisplayPort cables:

HDMI Type A to HDMI Type A (HDMI 2.1 and earlier version specifications) using the HDMI-A to HDMI-A Connector Module

ACT v2 Connector Module: HDMI-A to HDMI-A allows testing of HDMI cables with the Advanced Cable Tester v2

DisplayPort to DisplayPort (DisplayPort 2.0 and earlier version specifications) using the DisplayPort to DisplayPort Connector Module

ACT v2 Connector Module: DisplayPort to DisplayPort allows testing of DisplayPort cables with the Advanced Cable Tester v2.

For more information on how the Advanced Cable Tester v2 can help you achieve individual quality control or other testing requirements in your environment, please email us at sales@totalphase.com.

How Do I Start Using the Beagle I2C/SPI Protocol Analyzer to Monitor the MDIO Bus?


Question from the Customer:

This is my first project – can you help me get started? I am using the Beagle I2C/SPI Protocol Analyzer to monitor the MDIO bus. Which pins should I connect to MDC and MDIO?

Response from Technical Support:

Thanks for your question! Here is the information to help you get started.

How the Beagle I2C/SPI Analyzer Works with MDIO

The Beagle I2C/SPI analyzer non-intrusively monitors the Management Data Input/Output (MDIO) bus at up to 2.5 MHz with a 3.3V signal level. To monitor an MDIO bus that has a different voltage level, you can use the Total Phase Level Shifter Board. Here are the pin numbers of the signals you asked about.

  • MDIO signal, pin 8, is the bidirectional management data input/output. MDIO is used to transfer data between the Station Management Entity (STA) and the MDIO Manageable Devices (MMD).
  • MDC signal, pin 7, is the management data clock. This is a control line that is driven by the STA, and synchronizes the flow of the data on the MDIO line.

The Beagle I2C/SPI analyzer monitors MDIO transactions for both Clause 22 and Clause 45.

MDIO Clause 22 and Clause 45

Clause 22 and Clause 45 are parts of the IEEE 802.3 specification.

Clause 22 defines the basic MDIO communication frame format, as shown below.

MDIO Clause 22 frame format

Clause 45 supports low voltage devices down to 1.2V and extends the frame format, providing access to more devices and registers.

MDIO Clause 45 frame format

MDIO Clause 45 table

For more information, please refer to the knowledge base article MDIO Background. We also recommend Use The MDIO Bus To Interrogate Complex Devices, published by Electronic Design Magazine.

How to Capture MDIO Signals

To accurately capture MDIO signals, the sampling rate must be set properly. The minimum requirement for the sampling rate is twice the bus bit rate. For details, please refer to the following:

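For illustration, here is a minimal sketch of configuring the sampling rate with the Beagle Python API (beagle_py). This is an assumption-laden outline rather than a complete capture program: it assumes an analyzer on port 0 and the 2.5 MHz MDIO bus described above.

from beagle_py import *

MDIO_BITRATE_KHZ = 2500                    # 2.5 MHz MDIO bus

beagle = bg_open(0)                        # open the Beagle analyzer on port 0
if beagle <= 0:
    raise SystemExit("Unable to open Beagle analyzer")

# Sample at no less than twice the bus bit rate; the call returns the
# actual sampling rate the hardware will use.
actual_khz = bg_samplerate(beagle, 2 * MDIO_BITRATE_KHZ)
print("Sampling at %d kHz" % actual_khz)

bg_enable(beagle, BG_PROTOCOL_MDIO)        # begin non-intrusive MDIO capture
# ... read captured frames here, e.g. with bg_mdio_read() ...
bg_disable(beagle)
bg_close(beagle)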
We hope this answers your question. Additional resources that you may find helpful include the following:

Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about Total Phase products.

Request a Demo

Total Phase at ESC Silicon Valley 2019: SPI, I2C, and CAN at Their Finest


This past week, Total Phase exhibited at the Embedded Systems Conference - Silicon Valley in Santa Clara, CA, which was also combined with the Drive World Conference. Given the audience, there was a huge focus on autonomous driving with very interesting case study presentations being conducted in the Expo Hall.

On display at the Total Phase booth were our Promira Serial Platform and our Komodo CAN Duo Interface, which continually received praise for their ease of use and affordability. We also showcased our Advanced Cable Tester v2 as momentum for this new product continues to build.

Total Phase at ESC Silicon Valley 2019

As always, many happy existing and past customers made time to stop by our booth which we greatly appreciate!

If you are finding challenges with effectively debugging in common serial protocols such as I2C, SPI, USB, CAN, A2B, or eSPI, or you are one of many companies who are concerned with the safety and reliability of next generation USB and video cables, then please contact us at sales@totalphase.com – we’d love to help.

DisplayPort 2.0 is the Latest DisplayPort Spec – How does it Compare to DisplayPort 1.4?


DisplayPort (DP) 2.0 is the newest specification released by VESA (Video Electronics Standards Association) in June 2019. This new release comprises a number of new features and upgrades over the previous DisplayPort 1.4 spec. Let’s get to know the differences between DisplayPort 1.4 and DisplayPort 2.0, and learn what we can expect from the latest release.

Introduction of DisplayPort 1.4

DisplayPort 1.4 brought many features into the picture, including support for 8K resolution, Display Stream Compression (DSC) 1.2, and High Dynamic Range (HDR).

First and foremost, one of the major features DisplayPort 1.4 introduced is support for enhanced resolutions, including display resolutions of 8K at 60 Hz and 4K at 120 Hz. Building upon the previous architecture from DisplayPort 1.3, DisplayPort 1.4 also supports a maximum link bandwidth of 32.4 Gbps, transmitting 8.1 Gbps per lane over 4 lanes.

DisplayPort 1.4 incorporates the previously adopted Display Stream Compression (DSC) standard, but it is the first spec to include the newest DSC version 1.2, which supports a wider range of display applications, including externally connected displays like PC monitors and televisions. DSC 1.2 also supports native 4:2:0 and 4:2:2 coding, which eliminates the conversion of pixels into RGB components and in turn enables more efficient compression of incoming pixels. Version 1.2 also supports up to 16 bits per color, creating high-quality color depth content. With these features provided by DSC 1.2, HDR is able to function more efficiently.

Because DisplayPort 1.4 uses compressed video transport, the spec can also be carried over the USB Type-C connector, allowing high-definition video alongside SuperSpeed USB, as well as HDR and 8K over either a native DP cable or a USB Type-C connector through DisplayPort Alt Mode.

Other features within DP 1.4 include Forward Error Correction on top of DSC 1.2, which allows for error-free video transport to external displays. DP 1.4 also includes HDR metadata transport and expanded audio transport with up to 32 audio channels and a 1536 kHz sample rate.

Upgrading to DisplayPort 2.0

While DisplayPort 1.4 has its stake in the market, we will soon see wider adoption of the latest DisplayPort 2.0 spec by many companies. What can we expect from the newest release?

DisplayPort 2.0 introduces three different bit rates per lane over four lanes: 10 Gbps, 13.5 Gbps, and 20 Gbps. This means that DP 2.0 can in theory triple its max link bandwidth, up to 80 Gbps. However, at this time, VESA is focusing on creating passive cables supporting UHBR 10 (Ultra High Bit Rate), delivering a total of up to 40 Gbps. With this increase in bandwidth, VESA states that “DP 2.0 is the first standard to support 8K resolution (7680 x 4320) at 60 Hz refresh rate with full-color 4:4:4 resolution, including with 30 bits per pixel (bpp) for HDR-10 support.”
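As a quick back-of-the-envelope check of these figures (an illustration only, using the per-lane rates and four-lane configuration named above):

# Aggregate DP 2.0 link bandwidth = per-lane bit rate x number of lanes.
for name, gbps_per_lane in [("UHBR 10", 10.0), ("UHBR 13.5", 13.5), ("UHBR 20", 20.0)]:
    print(f"{name}: {gbps_per_lane} Gbps x 4 lanes = {gbps_per_lane * 4:g} Gbps total")
# UHBR 10 yields the 40 Gbps passive-cable target; UHBR 20 yields the 80 Gbps maximum.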

 DisplayPort 2.0 will allow multiple display configurations, supporting various resolutions and refresh rates, even up to 16K at 60 Hz.

Image by cdu445

DisplayPort 2.0 brings all new enhancements for its display resolution support, going beyond 8K, and supporting virtual reality/augmented reality displays. VESA classifies various configurations of resolution and refresh rates that can be supported with the new spec, including:

Single display resolutions

  • One 16K (15360×8640) display @60Hz and 30 bpp 4:4:4 HDR (with DSC)
  • One 10K (10240×4320) display @60Hz and 24 bpp 4:4:4 (no compression)

Dual display resolutions

  • Two 8K (7680×4320) displays @120Hz and 30 bpp 4:4:4 HDR (with DSC)
  • Two 4K (3840×2160) displays @144Hz and 24 bpp 4:4:4 (no compression)

Triple display resolutions

  • Three 10K (10240×4320) displays @60Hz and 30 bpp 4:4:4 HDR (with DSC)
  • Three 4K (3840×2160) displays @90Hz and 30 bpp 4:4:4 HDR (no compression)

When using only two lanes on the USB-C connector via DP Alt Mode to allow for simultaneous SuperSpeed USB data and video, DP 2.0 can enable such configurations as:

  • Three 4K (3840×2160) displays @144Hz and 30 bpp 4:4:4 HDR (with DSC)
  • Two 4Kx4K (4096×4096) displays (for AR/VR headsets) @120Hz and 30 bpp 4:4:4 HDR (with DSC)
  • Three QHD (2560×1440) @120Hz and 24 bpp 4:4:4 (no compression)
  • One 8K (7680×4320) display @30Hz and 30 bpp 4:4:4 HDR (no compression)

DP 2.0 also exploits its higher bandwidth capabilities over the Type-C connector to provide a better user experience: with the increase in bandwidth, users can operate both SuperSpeed USB and high-resolution video concurrently.

Lastly, DP 2.0 also introduces the Panel Replay feature, which enables increased power savings in smaller devices or all-in-one PCs with higher resolutions by optimizing display refreshes.

How to Ensure DisplayPort Cables are up to Spec

Since DP 2.0 is newly introduced, quality and interoperability issues may arise as it is rolled out into home entertainment systems, including gaming consoles, televisions, PCs, and the cables themselves. It’s important not only to ensure cables meet specification requirements, but also to maintain complete quality control over cables during the development and production process. Functional testing alone will not catch errors beneath the surface that can greatly affect the quality of the cable, and testers cannot determine with the naked eye whether a cable meets display resolution standards.

Total Phase’s Advanced Cable Tester v2 supports testing a variety of cables, including widely adopted video cables such as HDMI and DisplayPort. Specifically, this cable testing solution can test DisplayPort cables built to the latest DisplayPort 2.0 specification (and earlier) to ensure they are up to standard and quality-made. The Advanced Cable Tester v2 comprehensively tests a variety of components within DP to DP cables, including pin continuity to detect shorts and opens, DC resistance on all non-High Speed and SuperSpeed wires, and signal integrity to determine the quality of the signal on data lines up to 12.8 Gbps per channel.

The Advanced Cable Tester v2 supports testing DisplayPort to DisplayPort cables.

With the Advanced Cable Tester v2, testing each cable quickly and thoroughly is now achievable, as our cable tester is designed to fit factory and production line settings, economizing the testing and overhead costs.

For more information on how the Advanced Cable Tester v2 can help ensure quality over your video cables, please contact us at sales@totalphase.com.

How Do I Interface the Beagle I2C/SPI Protocol Analyzer to SPI Flash Slave and Master Devices?


Interfacing to hardware for data from SPI devices. Image source: Geralt

Question from the Customer:

We are starting to use a Beagle I2C/SPI Protocol Analyzer to capture data that is sent to and received from a SPI serial flash device (25L64 type) by an MCU.

  • We have the pin-out for the SPI flash memory.
  • We also have the 10-pin header pin signal assignment list from the Beagle manual.
  • The target circuit provides its own power: 3.3V
  • We are using the Data Center Software.

We'd like to solder-tack a few wires from the Beagle I2C/SPI analyzer to the SPI memory and capture the bus data. We prefer not to remove the device from the PCB.

Our question - how are the signals mapped from the analyzer to the device? We’ve tried many variations of connecting the Beagle analyzer to the SPI device, but so far, we’ve been “missing” the target.

Response from Technical Support:

Thanks for your questions! How to connect the signals depends on whether the SPI device is set up as a slave or a master, and what the device data sheet specifies for WP and HOLD.

Connecting to a Slave SPI Flash Device

  • Connect SCK to CLK
  • Connect MOSI to DI
  • Connect MISO to DO
  • Connect SS to CS
  • Connect GND to GND
  • WP and HOLD tied to GND or VCC, according to data sheet

Connecting to a Master SPI Flash Device

  • Connect SCK to CLK
  • Connect MOSI to DO
  • Connect MISO to DI
  • Connect SS to CS
  • Connect GND to GND
  • WP and HOLD tied to GND or VCC, according to data sheet

For more details, please refer to the Beagle I2C/SPI/MDIO Protocol Analyzer section of the Beagle Protocol Analyzer User Manual.

Additional Recommendations

Here are some articles about using the Beagle I2C/SPI analyzer that may interest you:

Communicating with Embedded Devices

Should you need to actively communicate with your setup, such as sending commands and programming devices, we recommend using the Aardvark I2C/SPI Host Adapter. We have a video that shows examples of using both the Beagle analyzer and the Aardvark adapter for prototyping an embedded system.

We hope this answers your questions. Additional resources that you may find helpful include the following:

Looking for more information? You can contact us and request a demo that applies to your application, as well as ask about Total Phase products.

Request a Demo


Top 10 Most Common IoT Security Issues of 2019


There’s no denying that the Internet of Things (IoT) has expanded and improved technology. Not only does it help businesses gauge customer satisfaction, medical professionals gain a more accurate read on patient symptoms, or a runner track how many calories she burns as she improves her mile time, but it has also led professionals across a range of industries to innovate in ways that weren’t possible in previous years.

With this more interconnected form of technology, however, comes more responsibility and a greater need to keep security strong. When it comes to maintaining safety precautions for your company, you’ll want to have a clear idea of what the issues are concerning the IoT.

How far has IoT come?

While a Coca-Cola machine at Carnegie Mellon University was one of the first machines to use the Internet of Things in the 1980s, it has come a long way since then. People with connected devices have access to a lot more than being able to tell if a soda machine has their carbonated drink of choice. Apple watches can track all sorts of health statistics, pacemakers allow doctors a more detailed study of patients’ hearts, and cars can become hot spots with real-time GPS and music streaming from your cell phone, just to name a few. 

To give you an idea of how much the IoT has changed in the past 10 years, in 2009, there were about 900 million connected things in the world. By 2020, that number will be closer to 20 billion.

Whether you’re just introducing this helpful and nuanced technology to your industry, or you've had it in use and are concerned about where it’s headed, we’ve compiled a list of what to keep in mind as the technology changes.

1. Inadequate device updates

As with any newer or constantly changing area of technology, security updates and patches are necessary but difficult to install. Why? Because there is no standardized method of implementing such software updates.

There are vulnerabilities in any form of software. Think about how often your phone and computers notify you that you must install the latest updates. The purpose is to remove bugs that can create defects in the software, as well as to ensure that your devices have the latest security installed.

You might notice that the more consistently you update your phone or laptop, the less likely those devices are to be victims of malware attacks. Now imagine all your connected devices. It’s not as easy as hitting the update button and waiting for your phone or laptop to restart.

Though engineers are working to develop a standardized method of firmware updates to work across the entire Internet of Things, it is still a work in progress and IoT network security issues that exist today can make you more susceptible to malware attacks.

For the embedded systems developer working on IoT products, the development of quality firmware and a process to implement it is important. Agility and speed are also important. With Total Phase products, developers can increase velocity without sacrificing quality. For an introduction to the benefits of our USB development tools, see our Shorten Time to Market with Affordable USB Development Tools article.

2. Inadequate device testing

Not only do devices need to be tested, but so too do the networks they run on and the infrastructures they are built on. Weak network infrastructure and inconsistent internet connectivity negatively impact how efficiently and effectively smart devices work.

Because of unstable network infrastructure, inconsistencies in internet connectivity, and an overabundance of IoT platforms, device testing can get really tricky. Software is what keeps IoT technology running effectively, but each device that the IoT is connected to has its own hardware. Further, there is even more variation because different operating systems and firmware exist.

Changing passwords, enforcing IoT protocols, and creating testing strategies that work across a variety of platforms are critical.

The sheer number of combinations of hardware and software platforms makes it virtually impossible to test the communication and connectivity of all of the combinations. Analyzing the information from your end-users, however, will give you a better idea of which combinations are the most common, and you can start at least by testing those.

Total Phase products enable developers and QA teams to improve their testing of IoT devices. For example, the Aardvark I2C/SPI Host Adapter enables engineers to interface with an embedded system using I2C and SPI via USB connection. Similarly, the Beagle I2C/SPI Protocol Analyzer enables debugging of I2C, SPI, or MDIO based embedded systems.

For a crash course on testing embedded systems like those found in IoT, review our Benchtop Testing for Embedded Systems Guide.

3. Authenticating passwords on devices

IoT security issues can be avoided if devices require users to engage in best practices. Engineers can help encourage users to secure their devices in a number of ways. For example, forcing a change of default password on the first login eliminates low-hanging fruit for attackers. Similarly, one-time passwords (OTP) can mitigate risk. With OTP, the potential damage from a hacker learning a password at any given time is limited due to the fact that a password cannot be reused. For M2M (machine to machine) communication using protocols like MQTT, certificate-based authentication helps ensure only trusted devices can communicate.
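To make the one-time password idea concrete, here is a minimal time-based OTP (RFC 6238) sketch using only the Python standard library; it illustrates the mechanism and is not production authentication code:

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # HOTP/TOTP: HMAC the current time step, then dynamically truncate.
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Device and server derive the same short-lived code from a shared secret,
# so an intercepted code cannot be reused after the interval expires.
print(totp(b"shared-device-secret"))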

4. Malware Attacks 

IoT has become a popular target for hackers. Case in point: Silex malware was able to infect so many IoT devices in large part thanks to easy-to-guess passwords. IoT engineers can mitigate risk to their users by enforcing strong authentication policies (we can't stress this enough: changing default passwords is important) and encryption protocols. Types of malware attacks to be aware of with IoT include mining of cryptocurrency, router and storage device infections, and distributed denial of service (DDoS).

5. Implementing data privacy

Legislation like the GDPR (General Data Protection Regulation) makes data privacy more important than ever. Not only do IoT engineers have an ethical obligation to keep data private, but regulations like the GDPR also make doing so a legal requirement. IEEE calls out a number of questions to ask to help tackle data privacy in IoT, including:

  • What personal data does your IoT device collect?
  • Where and how does your device store that data?
  • Who has access to the data?
  • What is it used for?
  • How long will it be stored?
  • How will individuals be notified if their data is leaked?

While capturing data is a big part of IoT, anonymizing personally identifiable information (PII), using strong encryption, and only keeping relevant PII for a reasonable amount of time are important parts of secure and responsible IoT development.

6. Data security and privacy

The challenges surrounding PII lead us to this point. How do you keep data secure and private from an engineering perspective? Collecting PII only when necessary is a start. Only using secure data transport methods like HTTPS or SSH to send data across a network is another important step. Keeping encryption protocols up to date for data at rest and in transit is important as well; today that means avoiding protocols like DES or SSL v3.0 and instead using algorithms and protocols like SHA-256 and TLS 1.2. Similarly, using the principle of least privilege to give users access only to the data they must have limits the potential for unintended exposure of PII.

Common IoT Security Issues of 2019. Image Courtesy of Pixabay

7. Insecure communication

Insecure network communication using cleartext protocols makes IoT devices much easier to hack, so cleartext protocols must be avoided at all costs. Using a cleartext protocol to transmit data enables anyone with network access and a packet sniffer to read the data transmitted to and from IoT devices. For a quick breakdown of protocols NOT to use, and what protocols to use instead, see below.

  • Do NOT use Telnet, instead use SSH
  • Do NOT use HTTP, instead use HTTPS
  • Do NOT use FTP, instead use SFTP or SCP
  • Do NOT use SNMP v1 or v2c, instead use SNMP v3

8. Vulnerabilities and attacks

New vulnerabilities and exploits are discovered all the time. Staying up to date on the latest CVEs and issuing patches and security updates when appropriate helps keep your devices secure. The world of IoT security moves fast, so enabling users to patch devices in the field is important. Additionally, using security scanners to check your devices for overlooked exploits helps ensure you avoid leaving known weaknesses exposed.

9. Complex systems

IoT devices exist in complex networks. The market is also growing and expanding rapidly, leading to new use cases and integrations in a short period of time. The larger this ecosystem grows, the larger the attack surface for IoT devices becomes. Whenever a new feature or protocol is implemented, it poses a potential data security risk. IoT engineers must ensure security is taken into account both at the device-level and network-level.

10. Learning to predict and prevent security issues

Proactive preventative measures go a long way in enabling secure IoT development. This means IoT development teams need to emphasize security throughout the product life cycle. This requires using many of the suggestions we have mentioned throughout this article. For example, by forcing a user to change the default password and also enforcing the use of strong passwords, IoT engineers can prevent many attacks that assume default passwords are enabled. However, hackers learn and adapt over time, so using regular vulnerability scans, staying up to date on security, and implementing best practices like the principle of least privilege can help mitigate risk and improve security posture.

Want to learn more? Contact the Total Phase Team of Experts!

If you find yourself needing support in the area of IoT protocol implementation or security practices, Total Phase is well equipped with the knowledge you need. Whether it’s to further discuss something you read here or if questions arose that weren’t answered here, reach out. Our sales team will ensure your security concerns are taken care of.

Why Do I See Different DC Resistance Results when I Flip a USB Type-A to USB Type-C Cable?


Question from the Customer:

With the Advanced Cable Tester v2, I am testing samples of USB Type-A to USB Type-C cables from various suppliers. In many cases, I see a DC Resistance failure for “GND+SHIELD A-Side Link”. I captured the results of a single cable in normal and flipped orientations (CC on the same side as DP/DM, or CC on the other side):

DC Resistance measurement of USB cable (normal orientation)

DC Resistance measurement of USB cable (flipped orientation)

With these two orientations, I noticed that the Expected Max (Ω) column (second column from the right in the screen captures) shows significantly different values. What can you tell me about these results?

  • Is this correct, or should the Expected Max (Ω) be the same value for each test?
  • Exactly what is this test doing?
  • Why are the measurements so different?

Response from Technical Support:

Thanks for your questions! There are shield and ground specifications that the Advanced Cable Tester v2 (ACT v2) is built to test and confirm. Following are details about what is being tested and why.

Type-C Specifications with Legacy Cables

Type-A and Type-B are legacy cables. Here is information about running tests with a USB A-Side or USB B-Side GND+SHIELD Link.

The Type-C specification requires that all legacy cables have a connection between GND and the connector’s SHIELD, and that this connection is made within each connector. This is addressed in the design and is difficult to measure externally. The ACT v2 uses three measurements to check whether a cable meets this requirement.

Measurements for Confirming Cable Specifications

Three measurements are taken to verify the ground-shield specifications:

  • Rgnd_wire: GND pin to GND pins, through the cable
  • Rshield_wire: Shell to Shell, through the cable’s SHIELD
  • Rgnd_to_shell: GND pin to Shell, on the A-side or B-side

When the GND+SHIELD connection is correct, the resistance of Rgnd_to_shell is substantially less than the resistance of the Rgnd_wire and the Rshield_wire.

In the case where there is not a GND+SHIELD connection within the legacy plug, but there is a connection within the Type-C plug, the path for Rgnd_to_shell is essentially the entire path through Rgnd_wire + Rshield_wire.

Pass/Fail Criteria

ACT v2 takes these three measurements and uses the following criterion for a pass:

  • Rgnd_to_shell < (Rgnd_wire + Rshield_wire) / 2

The division by 2 provides some margin for the measurement. Please note, in cases where the shell contact resistance is far greater than the wire resistances, a false failure may occur.
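As a simple illustration of how this criterion behaves (a sketch only; the function and the example resistance values are ours, not output from the ACT v2 software):

def gnd_shield_link_passes(r_gnd_to_shell, r_gnd_wire, r_shield_wire):
    # Pass if the direct GND-to-shell path is clearly shorter than routing
    # through the entire cable; dividing by 2 provides measurement margin.
    return r_gnd_to_shell < (r_gnd_wire + r_shield_wire) / 2

# GND tied to SHIELD inside the plug: the local path reads low (pass).
print(gnd_shield_link_passes(0.05, 0.08, 0.09))   # True
# Connection only at the far end: the path is essentially
# Rgnd_wire + Rshield_wire, so the reading is high (fail).
print(gnd_shield_link_passes(0.17, 0.08, 0.09))   # False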

Why Flipped Cables Show Different Results

The pass criterion, Rgnd_to_shell < (Rgnd_wire + Rshield_wire) / 2, depends on the actual measured values of Rgnd_wire and Rshield_wire. Because those measurements can differ between the two orientations of the cable, a different pass/fail threshold may be applied in each test.

Using the Advanced Cable Tester v2 for Other Tests

If you need additional information, we also have a video about using the Advanced Cable Tester with various cable profiles and examples of tests to run.

You might also be interested in our recent articles about the importance of effective and accurate cable testing:

Additional resources that you may find helpful include the following:

We hope this answers your question. Need more information? You can contact us and request a demo that applies to your application, as well as ask questions about our Advanced Cable Tester and the available options, and other Total Phase products.

Request a Demo

How Do I Talk to an I2C Translating Switch with an 8-bit Address?


Question from the Customer:

I am working with the Aardvark I2C/SPI Host Adapter and the Aardvark Software API (Python). I am using the command aaspi_eeprom.py 0 100 read 0 0 4096. The I2C device I am working with is a PCA9546 translating switch: the address is 0xE0 and the EEPROM I am talking to connects to port 1 of the switch.

What changes do I need to make?

Response from Technical Support:

Thanks for your question! Many functional examples are included with our API packages. However, the programs provided read, program, and erase the AT25080A SPI EEPROM and AT24C02 I2C EEPROM devices that are used on our accessory board, the I2C/SPI Activity Board.

Our API can be modified for your devices. Looking at the PCA9546 data sheet, here are guidelines to help you get started.

Getting Started with the Device Data Sheet

For the PCA9546 device at address 0xE0, we recommend looking at our aai2c_eeprom example file and then modifying that program to perform the required read and write commands for your setup.

Looking at the data sheet, here is some key information:

The PCA9546 device address, 0xE0, is an 8-bit address. All Total Phase I2C products follow the standard 7-bit addressing and 10-bit addressing conventions.

How to Communicate with 8-bit Addresses

The slave address used should be only the top seven bits. For your requirement, the device address 0xE0 (8-bit address) is represented as 0x70 (7-bit address). For more information on addressing schemes, please refer to this knowledge base article, 7-bit, 8-bit, and 10-bit I2C Slave Addressing.
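As a hedged sketch of putting this into practice with the Aardvark Python API (aardvark_py): the shift below derives the 7-bit address, and the control-register value 0x02 for selecting channel 1 comes from the PCA9546 data sheet. Adapt the port number and values to your hardware.

from array import array
from aardvark_py import *

ADDR_8BIT = 0xE0
SWITCH_ADDR = ADDR_8BIT >> 1              # drop the R/W bit: 0xE0 -> 0x70

handle = aa_open(0)                       # adapter on port 0
if handle <= 0:
    raise SystemExit("Unable to open Aardvark adapter")
aa_configure(handle, AA_CONFIG_SPI_I2C)
aa_i2c_bitrate(handle, 400)               # 400 kHz

# PCA9546 control register: bit N enables channel N, so 0x02 selects port 1.
aa_i2c_write(handle, SWITCH_ADDR, AA_I2C_NO_FLAGS, array('B', [0x02]))

# The EEPROM behind the switch can now be reached at its own slave address,
# for example with the aai2c_eeprom example script described below.
aa_close(handle)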

API Example Script

aai2c_eeprom.py is an example API script used to program, erase, and read the AT24C02 I2C EEPROM. Here is an overview of the commands to use.

aai2c_eeprom PORT BITRATE read SLAVE_ADDR OFFSET LENGTH
aai2c_eeprom PORT BITRATE write SLAVE_ADDR OFFSET LENGTH
aai2c_eeprom PORT BITRATE zero SLAVE_ADDR OFFSET LENGTH
    • The read, write, and erase operations are performed based on the command given.
    • The "zero" command performs the erase operation: the EEPROM is programmed with all 0s for the specified length of bytes.

Here is an example of reading 32 bytes from the device at slave address 0x50, with the bitrate set to 400 kHz:

aai2c_eeprom 0 400 read 0x50 0 32

The data read from the device:

0000: 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10
0010: 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 00

Please let us know if you have additional questions.

Additional resources that you may find helpful include the following:

We hope this answers your questions. If you have other questions about our software, host adapters or other Total Phase products, feel free to email us at sales@totalphase.com. You can also request a demo specific for your application.

Request a Demo

Hidden Damage to Your Phone’s Battery Due to Wireless Charging? Sticking to Quality Cables May be the Way to Go.


Potential Damage to Your Phone’s Battery Due to Wireless Charging

What is Wireless Charging?

Wireless charging, also known as inductive charging, is a newly introduced approach to charging mobile-phone batteries. This method uses induction from the electromagnetic coils in the phone and charging platform to transfer power.

While this new concept is convenient for phone users, it may actually be causing phone batteries to weaken and degrade over time. Because both phones and chargers naturally radiate heat, when a phone is placed directly on the charging platform, heat can transfer, and generally, there is no safeguard to prevent excessive heat exposure between the two.

In the article, “Bad News: Wireless Charging Is Probably Killing Your iPhone’s Battery” by iDropNews, the author mentions that this method of charging is not only potentially damaging, but that the amount of heat radiated can amplify if the coils in the phone and charging pad are misaligned. They note that wireless charging systems may increase the amount of power to compensate for this misalignment.

Studies Prove that Inductive Charging can be Damaging

Researchers have even conducted studies that support these hypotheses. One study determined that properly aligned coils will cause the iPhone temperature to rise up to seven degrees hotter, while another study showed that misaligned coils also raise the temperature seven degrees, but at a much quicker rate, and the temperature remains at this level for longer.

The article also provided some tips to help prevent the potentially harmful effects of wireless charging, including ensuring coils are aligned, not charging in heat, avoiding phone use while charging, and using a fan.

Sticking to Quality-Made Cables Can Save your Phone

While there are safety precautions in place, simply avoiding wireless charging altogether and using wired charging may help protect against unwanted battery damage.

Using compliant cables as opposed to wireless charging helps ensure that your devices connect to each other properly and safely, and that they maintain their proper function. With that said, it’s still vitally important to use quality-made cables when charging phones. Our blog, “What are the Dangers of Manufacturing and Using Untested USB Cables?”, explains the many consequences of using and manufacturing poorly-made cables.

 Unlike wireless charging, charging with compliant cables can end up saving your phone’s battery life.

Photo By Negative Space

The good news is that using known-good cables is easier than ever with help from Total Phase. Our cable testing solution, the Advanced Cable Tester v2, is the only tool of its kind that performs a complete, comprehensive analysis on cables ranging from USB and Apple Lightning to video, covering some of the most popular, widely used cable types on the market. With just one tool, users can test a variety of cable specifications, flagging any inconsistencies and unsafe measurements within pin continuity, DC resistance, E-Marker, quiescent current, and signal integrity. For Lightning cables specifically, users can also run Lightning-specific tests including over-voltage, Lightning plug bring-up, and quiescent current consumption.

Because this tool is designed for mass-scale production testing, this tester will provide a complete report of a passing or failing cable in just seconds, so maintaining a cost-effective and efficient quality control system is achievable.

For a quick demonstration on how to use the Advanced Cable Tester v2 and an overview of the criteria that is tested, please check out our video: Testing USB, Apple Lightning, and Video Cables with the Advanced Cable Tester v2.

If you have further questions on how the Advanced Cable Tester can help ensure safety and quality in your cables, please email Total Phase at sales@totalphase.com.

What is FPGA? Introduction to Field Programmable Gate Array


What is FPGA?

A field-programmable gate array, or FPGA for short, is a special type of semiconductor device that can be programmed by a customer, a product designer, or embedded systems engineer after the hardware has been manufactured and sold. An FPGA gets its name from two of its defining properties: field-programmable refers to the fact that these integrated circuits can be programmed in the field and a gate-array is a reference to the two-dimensional array of logic gates that make up the circuit.

An FPGA can be described as an integrated circuit, a set of circuits that includes wiring, programmable logic gates, and registers. Logic gates are the basis for the functionality of digital circuits. They perform logical operations using Boolean algebra on electrical pulses that represent the binary language of computers. FPGAs allow embedded systems engineers to reprogram these logic gates with customized digital logic on an as-needed basis, updating or modifying the functionality of the circuit to match requirements in the field.

FPGA vs ASIC - What's the Difference?

ASIC is an acronym for application-specific integrated circuit. This type of integrated circuit is configured for a specific application, and they maintain that same functionality throughout their entire operational life. The central processing unit (CPU) in your home computer is an ASIC - it will function as a CPU for its whole life.

The key difference between an FPGA and the CPU in your home computer is that your CPU cannot be reprogrammed. In an ASIC, the digital circuitry consists of permanently connected logic gates and flip-flops in silicon. These logic gates cannot be configured or programmed after manufacturing - you simply get what you get.

In contrast, an FPGA can be configured in the field to deliver new features and functions, adapt to changing design standards, or meet the requirements of specific applications, even when the FPGA is already installed in a system. FPGAs offer increased flexibility in design with reduced costs and a decreased likelihood of delays in product design.

Field Programmable Gate Array (FPGA) Applications

FPGAs were first developed in the 1980s, but they have evolved and changed significantly since their inception. The first FPGA circuits were large and took up a lot of space, but as circuit components continued to get smaller, hardware engineers were able to incorporate an increasing number of devices. This ultimately allowed for more complex functionality and faster arithmetic on FPGAs, which opens up a whole new set of FPGA applications for hardware developers and engineers.

The FPGAs in use by embedded systems designers today are highly sophisticated and feature-rich. They include mixes of configurable static random access memory (SRAM), logic blocks, routing, and high-speed input/output pins. FPGAs may also include other hard intellectual property (IP) such as memory blocks, transceivers, protocol controllers, calculating circuits, and even entire CPUs. While these IP features are typically not configurable, they do provide additional functionality with lower cost and power consumption than other types of circuits.

Build Your Own CPU with FPGA

With the added sophistication of today's FPGAs, embedded engineers can build an entire processor using an FPGA. There are several electronics manufacturers encouraging their customers to use their design tools and framework to develop a customized 32-bit CPU using an FPGA. Customized central processing units can be uniquely configured and optimized to run a specific application, and may even do so faster than the best available mass market processor.

Of course, building your own CPU from scratch requires a level of expertise in device design and logic programming. You will need to determine what peripherals the processor has to manage, define the communication protocols that will be used to interact with those peripherals, determine how much logic and memory is needed for data and programs, and create an optimized instruction-set for implementation.

Then it's time to create and compile your code onto the FPGA - and you'll also need to build a custom compiler if you plan to write any software that will run on your new processor.

FPGA Application in Wireless Data

The widespread proliferation and availability of wireless data has had a significant impact on how we access information and technology each day. New technologies like 4G and 5G networking protocols are enabling faster data transfer rates and making it easier to send more data through the internet from more devices at higher speeds than before. As new technology standards emerge for wireless data transfer, FPGAs can help wireless carriers reduce the expenses associated with upgrading their telecommunications infrastructure.

FPGAs today are equipped with built-in low-latency modules that work well with advanced networks, along with tools that help mobile carriers leverage the key advantages associated with FPGAs: lower power consumption and cost, with greater productivity and performance.

FPGA Application in the Automotive Sector

Driven by a need to increase safety performance and customer convenience, auto-makers have integrated electronics and computerization into virtually every aspect of automotive design. From safety features like airbags and automatic braking systems to ease-of-use features like navigation systems and 360-degree cameras, vehicles today use microcontrollers to integrate sensory information, interpret the environmental conditions and trigger autonomous responses or relay information to the driver.

Driver assistance cameras are a common area of FPGA application in the automotive sector. Camera systems for vehicles require high-speed video processing, the complex fusion of data from sensors on the vehicle, and real-time data analytics that help determine the best course of action. These systems depend on data from radar and laser sensors, which provide data in different formats that may not integrate well on some architectures.

Automotive FPGA Application

Traditional digital signal processors or microcontrollers are not sufficient for these high-tech camera systems, as they lack the necessary capacity and power output to perform data analytics and real-time video processing at the same time. There may also be a need for high dynamic range (HDR) processing to support modern cameras that can equally visualize both darkly and brightly lit areas of a scene.

As an alternative to traditional microcontrollers, engineers can integrate the whole camera system using a single FPGA. The FPGA can be programmed with a customized configuration to manage the unique computing requirements of the camera system, including using parallel processing engines to satisfy functions that require a lot of computing power.

FPGAs in High Performance Computing

FPGAs are increasingly being used in high-performance computing, despite the fact that CPUs may run at least an order of magnitude faster. This has to do with the fundamental architectural differences between FPGAs and conventional processing chips.

CPUs process requests in sequence. They break an algorithm into a sequence of operations and execute those operations one at a time until they are completed. In contrast, FPGAs can be configured as parallel processing devices - they can break an algorithm into parts and process them all at once. As a result, FPGAs can perform the same task in fewer clock ticks than a CPU despite having a lower clock speed.
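A toy calculation makes the point; the throughput numbers here are hypothetical, purely to illustrate how parallelism can beat raw clock speed:

# Hypothetical figures, only to illustrate the parallelism argument above.
cpu_clock_hz  = 3_000_000_000   # 3 GHz CPU, one operation per tick (sequential)
fpga_clock_hz =   300_000_000   # 300 MHz FPGA, 64 operations per tick (parallel)

ops = 1_000_000_000             # one billion independent operations
print("CPU :", ops / cpu_clock_hz, "seconds")          # ~0.33 s
print("FPGA:", ops / (fpga_clock_hz * 64), "seconds")  # ~0.05 s despite the slower clock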

The ability to process a high volume of data or transactions in a shorter period of time is essential for applications where a large volume of data is inherent, such as financial options trading, molecular dynamics, bioscience, and finite impulse response filters, among others. These applications depend on floating-point arithmetic that generates more accurate results than would be obtained using integer calculations.

Developers working in these areas use FPGAs to design and implement application-specific co-processors that help manage the increased need for memory resources and logical operations.

Summary

FPGAs are one of the most valuable development options for embedded systems engineers, as they give complete control over hardware functionality. This allows engineers to design their own application-specific functions and operations that are uniquely suited for specific tasks or products.

Total Phase makes it easier to build FPGA-based products with our Cheetah SPI Host Adapter. Embedded systems engineers can use our Cheetah adapter to load FPGA images into SPI flash memory. Stored FPGA images add configuration flexibility during the debug process for FPGA devices, helping to streamline the testing process and reduce time-to-market.
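For a flavor of what that looks like in practice, here is a minimal, assumption-heavy sketch using the Cheetah Python API (cheetah_py). It assumes an adapter on port 0 and a flash device that uses the standard WRITE ENABLE (0x06) and PAGE PROGRAM (0x02) commands; the file name is a placeholder, and a real programmer would also erase the flash first and poll its status register between pages.

from array import array
from cheetah_py import *

handle = ch_open(0)                          # Cheetah adapter on port 0
if handle <= 0:
    raise SystemExit("Unable to open Cheetah adapter")

ch_spi_configure(handle, CH_SPI_POL_RISING_FALLING,
                 CH_SPI_PHASE_SAMPLE_SETUP, CH_SPI_BITORDER_MSB, 0x0)
ch_spi_bitrate(handle, 10000)                # shift at 10 MHz

data = open("fpga_image.bin", "rb").read()   # placeholder bitstream file
for offset in range(0, len(data), 256):      # one 256-byte flash page at a time
    page = data[offset:offset + 256]
    ch_spi_queue_clear(handle)
    ch_spi_queue_oe(handle, 1)               # drive the SPI outputs
    ch_spi_queue_ss(handle, 0x1)             # assert slave select
    ch_spi_queue_array(handle, array('B', [0x06]))   # WRITE ENABLE
    ch_spi_queue_ss(handle, 0x0)             # toggle SS between commands
    ch_spi_queue_ss(handle, 0x1)
    cmd = [0x02, (offset >> 16) & 0xFF, (offset >> 8) & 0xFF, offset & 0xFF]
    ch_spi_queue_array(handle, array('B', cmd + list(page)))  # PAGE PROGRAM
    ch_spi_queue_ss(handle, 0x0)
    ch_spi_batch_shift(handle, 0)            # execute the queued batch

ch_close(handle)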

Learn More About Our Product
