Analytics is a very general term for correlating and digesting raw data to produce more useful results. Analytics algorithms can be as simple as data reduction or averaging on a stream of sensor readings, or as complex as the most sophisticated artificial intelligence or machine learning (AI/ML) systems. Today, analytics are commonly performed in the cloud because it is the most scalable and cost-effective place to run them. In the future, however, analytics will increasingly be distributed across the cloud, edge computing, and endpoint devices to improve latency, network bandwidth, security, and reliability. Here, we’ll discuss some of the architectures and tradeoffs associated with distributing analytics beyond the boundaries of the traditional cloud.
Simple analytics involve data reduction, correlation, and averaging, resulting in an output data stream much smaller than the input data. Consider the system that supplies fresh water to a large building. It might be valuable to know the pressures and flows at various points in the system to optimize the pumps and monitor the consumption. This could involve an array of pressure and flow sensors spread around the distribution piping. Software periodically interrogates the sensors, adjusts the pump settings, and creates a consumption report for the building managers. But, the raw readings from the sensors could be misleading—for example, a momentary pressure drop when a fixture is flushed. Analytics algorithms can average the readings from a given sensor over time and combine and correlate the readings from multiple sensors to create a more accurate and useful picture of the conditions in the pipes. All of these readings could be sent to analytics based in the cloud, but it would be a much more efficient architecture if the sensors did some of the averaging themselves, and local edge computers did the correlation and reporting. That’s distributed analytics, and it can improve the efficiency, accuracy, and cost of many analytics systems.
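To make that split concrete, here is a minimal Python sketch of the water-system idea; the sensor class, window size, readings, and anomaly threshold are all hypothetical. Each sensor smooths its own raw stream locally, and an edge node correlates the smoothed reports into one compact summary.

```python
from collections import deque
from statistics import mean

class PressureSensor:
    """Endpoint device: smooths its own raw readings before reporting."""
    def __init__(self, window=8):
        self.readings = deque(maxlen=window)

    def add_reading(self, psi):
        self.readings.append(psi)

    def report(self):
        # Local averaging filters momentary artifacts (e.g., a fixture flushing)
        return mean(self.readings)

def edge_correlate(reports, limit=5.0):
    """Edge node: combines smoothed reports from many sensors into one summary."""
    system_avg = mean(reports.values())
    anomalies = {sid: p for sid, p in reports.items()
                 if abs(p - system_avg) > limit}
    return {"system_pressure_psi": round(system_avg, 2),
            "anomalous_sensors": anomalies}

# Two sensors; sensor-1 sees a momentary 20 psi dip that averaging smooths away
s1, s2 = PressureSensor(), PressureSensor()
for psi in [60, 61, 60, 20, 60, 61, 60, 59]:
    s1.add_reading(psi)
for psi in [59, 60, 61, 60, 60, 59, 60, 61]:
    s2.add_reading(psi)

summary = edge_correlate({"sensor-1": s1.report(), "sensor-2": s2.report()})
print(summary)   # only this compact summary needs to travel to the cloud
```

Only the short summary crosses the network, which is where the bandwidth and efficiency gains of distributed analytics come from.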
Analytics becomes more complicated when AI/ML techniques are employed. AI/ML usually operates in two phases: a learning (training) phase, in which models are built by digesting large volumes of training data, and an inference phase, in which the trained models run against live data to produce results.
In today’s systems, the models are almost always built in large server farms or the cloud, often as an offline process. The resulting AI/ML models are then packaged and shipped to the systems that run the inference phase on live data, generating the desired results. The inference phase can run in the cloud, but it has recently been moving toward the edge to improve latency, network bandwidth, reliability, and security. Tradeoffs are worth considering when deciding which level of compute resource to use for each phase.
The inference phase of AI/ML is relatively easy to distribute across multiple peer-level processors or up and down a hierarchy of processing layers. If the models are pre-computed, the data upon which the AI/ML algorithms operate can be split across multiple processors and operated on in parallel. Splitting the workload between multiple peer-level processors provides capacity, performance, and scale advantages because more compute resources can be brought to bear as the workload increases. It can also improve system reliability because adjacent processors are still available to complete the work if one processor fails. Inference can also be split between multiple levels of a hierarchy, with different parts of the algorithm running at different levels. This lets the AI/ML algorithms be partitioned in logical ways, with each level of the hierarchy performing the subset of the algorithm it executes most efficiently. For example, in a video analytics AI/ML system, the intelligence in the camera could perform adaptive contrast enhancement, hand that data off to edge computers for feature extraction, send the features to neighborhood data centers for object recognition, and finally let the cloud perform high-level functions such as threat detection or heat-map generation. This can be a highly efficient partitioning.
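As an illustration of that camera-to-cloud partitioning, here is a schematic Python sketch. Every stage function is a hypothetical placeholder standing in for real image-processing code, and in practice each hand-off between tiers would be a network call rather than a local function call.

```python
# Hierarchical inference: each tier runs one pipeline stage and forwards a
# smaller, higher-level artifact upward. All stage bodies are placeholders.

def camera_enhance(frame):             # runs on the camera
    return {"frame": frame, "contrast": "normalized"}

def edge_extract_features(enhanced):   # runs on an edge computer
    return {"features": ["edge-map", "keypoints"], "src": enhanced["frame"]}

def dc_recognize_objects(features):    # runs in a neighborhood data center
    return {"objects": ["person", "vehicle"], "src": features["src"]}

def cloud_detect_threats(objects):     # runs in the cloud
    return {"alert": "vehicle" in objects["objects"], "src": objects["src"]}

PIPELINE = [camera_enhance, edge_extract_features,
            dc_recognize_objects, cloud_detect_threats]

def run_hierarchy(frame):
    result = frame
    for stage in PIPELINE:
        result = stage(result)   # in a real system, this hop crosses the network
    return result

print(run_hierarchy("frame-0042"))
```

Because each tier forwards only its distilled output, the upstream links carry features and object lists rather than raw video, which is what makes this partitioning efficient.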
The learning phase of AI/ML algorithms is harder to distribute. The problem is context size. To prepare a model, the AI/ML system takes large batches of training data and digests them with various complex learning-phase algorithms to generate a model that is relatively easy to execute in the inference phase. If only a portion of the training data is available on a given compute node, the algorithms will have trouble generalizing the model. That is why training is most often done in the cloud, where memory and storage are virtually unlimited. However, certain scenarios require the training algorithms to be distributed across multiple peer-level compute nodes or up and down the cloud-to-edge hierarchy. In particular, learning at the edge lets the system collect large amounts of training data from nearby sensors and act upon it without cloud involvement, which improves latency, reliability, security, and network bandwidth. Advanced distributed-learning algorithms are under development to address these challenges.
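One well-known approach to this kind of distributed learning is federated averaging, in which each node trains on its own local data and only model weights travel to an aggregator. The source does not name a specific algorithm, so the following NumPy sketch is purely illustrative, using a toy linear-regression model and synthetic data.

```python
import numpy as np

def local_train(weights, X, y, lr=0.05, epochs=20):
    """One edge node fits a linear model on its local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(node_weights, node_sizes):
    """Central aggregator: average models, weighted by each node's data volume."""
    total = sum(node_sizes)
    return sum(w * (n / total) for w, n in zip(node_weights, node_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])      # the relationship the nodes try to learn
global_w = np.zeros(2)

for _ in range(5):                  # each round: train locally, average centrally
    local_models, sizes = [], []
    for _ in range(3):              # three edge nodes with disjoint local data
        X = rng.normal(size=(40, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=40)
        local_models.append(local_train(global_w, X, y))
        sizes.append(len(y))
    global_w = federated_average(local_models, sizes)

print(global_w)   # close to [2.0, -1.0], yet no raw data ever left its node
```

The appeal for edge learning is visible in the loop: the raw sensor data stays on each node, and only the small weight vectors cross the network each round.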
AI/ML is an important future capability of nearly all electronic systems. Understanding the options for how the inference and training capabilities of these systems can be partitioned across a hierarchy of compute resources is key to our future success.
The blog was written by Charles Byers and originally published in 2020. Mouser updated the blog in June 2021.
The Renesas Synergy™ AE-CLOUD2 is an Internet of Things (IoT) development kit for prototyping cloud-connected sensor applications that can communicate through Long-Term Evolution (LTE) Category M1 (Cat-M1) or Category NB1 (Cat-NB1) cellular networks as well as Wi-Fi or Ethernet. The development kit features a programmable ARM®-based microcontroller and several types of sensors, including a Global Positioning System (GPS) receiver. Arduino-compatible expansion headers allow you to add other types of input/output (I/O) components. Software development is supported by the Renesas e2 studio Integrated Development Environment (IDE) coupled with an automated code-generation tool. The automated code-generation tool creates ready-to-run projects containing the mix of hardware, software, and cloud connectivity required for your application. The board includes a built-in J-Link debugger interface for programming and debugging using a personal computer running Windows.
Google Cloud IoT is a cloud service on the Google Cloud Platform. It allows you to define registered IoT devices along with their device identifications (IDs) and security credentials. The service creates publish and subscribe topics, which allow the secure exchange of messages between IoT devices and the Google Cloud. Other applications and Google Cloud services can subscribe to receive these messages and take further action on the data. IoT devices can also receive commands and configuration parameters through messages published to the corresponding topics.
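As a concrete example of the device side of this pattern, here is a hedged Python sketch of publishing telemetry through the Cloud IoT Core MQTT bridge using the paho-mqtt (1.x-style constructor) and PyJWT libraries. The project, region, registry, device, and key-file names are placeholders.

```python
import datetime
import jwt                     # PyJWT: signs the device's authentication token
import paho.mqtt.client as mqtt

PROJECT, REGION = "my-project", "us-central1"        # placeholders
REGISTRY, DEVICE = "my-registry", "my-device"        # placeholders

def make_jwt(project_id, key_path, alg="RS256"):
    """Cloud IoT Core authenticates devices with a JWT signed by the device key."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now,
              "exp": now + datetime.timedelta(minutes=60),
              "aud": project_id}
    with open(key_path) as f:
        return jwt.encode(claims, f.read(), algorithm=alg)

client_id = (f"projects/{PROJECT}/locations/{REGION}"
             f"/registries/{REGISTRY}/devices/{DEVICE}")
client = mqtt.Client(client_id=client_id)   # paho-mqtt 1.x-style constructor
# The bridge ignores the username; the JWT goes in the password field
client.username_pw_set(username="unused",
                       password=make_jwt(PROJECT, "rsa_private.pem"))
client.tls_set()                            # TLS with default CA roots
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# Telemetry published here is forwarded to the registry's Pub/Sub topic
client.publish(f"/devices/{DEVICE}/events", '{"temp_c": 21.5}', qos=1)

# Commands and configuration arrive on topics the device subscribes to
client.subscribe(f"/devices/{DEVICE}/config", qos=1)
```

The development kits discussed below implement this same connect-authenticate-publish flow in firmware, with the device credentials provisioned during setup.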
The Renesas Synergy Enterprise Cloud Toolbox Demo Dashboard is a quick-start web and cloud application that allows you to quickly get your AE-CLOUD2 kit running and sending sensor data to the cloud where you can view it on a real-time dashboard. Example application projects illustrate end-to-end operation, and you can modify the source code to extend and adapt to your own applications.
To use your own AE-CLOUD2 kit with the Google Cloud, check out our step-by-step article that walks you through the entire process of connecting the kit to Google Cloud IoT.
There, we also show you how to observe the published data by subscribing to the message data topic using a Python program. A set of next steps gives you suggestions for how to extend and adapt the application for different IoT prototyping scenarios or to learn more.
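The subscriber side can be as simple as the following sketch, which uses the google-cloud-pubsub client library; the project and subscription names are placeholders.

```python
from google.cloud import pubsub_v1   # pip install google-cloud-pubsub

PROJECT, SUBSCRIPTION = "my-project", "telemetry-sub"   # placeholders

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def on_message(message):
    # Each message carries one telemetry payload published by the device
    print(message.data.decode("utf-8"), dict(message.attributes))
    message.ack()

future = subscriber.subscribe(sub_path, callback=on_message)
print(f"Listening on {sub_path} ...")
try:
    future.result()        # block and stream messages until interrupted
except KeyboardInterrupt:
    future.cancel()
```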
The STMicroelectronics STM32L4 Discovery Kit for IoT Node is an Internet of Things (IoT) development kit containing an STM32 ultra-low-power microcontroller, several types of sensors, and several types of wireless communications interfaces, including Wi-Fi, Bluetooth, and Sub-GHz (868/915MHz). It’s ideally suited for prototyping sensor applications that connect to the cloud. Software development is supported by various integrated development environments (IDEs). Libraries are available for implementing functions such as board initialization, sensor input/output (I/O), Transmission Control Protocol/Internet Protocol (TCP/IP) networking, cloud communication protocols, and security processing. The board contains a built-in ST-LINK interface for programming and debugging using a personal computer running Windows, Mac OS X, or Linux. The embedded Wi-Fi module allows the board to connect to an 802.11b/g/n Wi-Fi access point for Internet connectivity.
The STMicroelectronics X-CUBE-GCP software package provides the essential software to get the STM32L4 Discovery Kit for IoT Node working with the Google Cloud. This software package contains several components and libraries, including microcontroller initialization, board sensor I/O drivers, a real-time operating system (RTOS), MQTT and TLS libraries, and a TCP/IP networking stack. An example program illustrates end-to-end operation, and you can modify the source code to extend and adapt it to your own applications. The software is downloadable for free from the STMicroelectronics website.
To use your own STM32L4 Discovery Kit for IoT Node with the Google Cloud, check out our step-by-step Integrating the STM32L4 Discovery Kit IoT Node with Google Cloud Platform Using Wi-Fi article that walks you through the entire process of connecting the board to the Google Cloud Platform over Wi-Fi.
There, we also show you how to observe the published data by subscribing to the message-data topic using a Python program. A set of next steps gives you suggestions for how to extend and adapt the application for different IoT prototyping scenarios or to learn more.
The Microchip Technology AC164160 AVR-IoT WG Evaluation Board is an Internet of Things (IoT) development kit containing an ATmega4808 low-power microcontroller, ATECC608A cryptographic coprocessor, Wi-Fi module, on-board sensors, and expansion bus for connecting a growing portfolio of modular sensor and actuator add-ons. Several Integrated Development Environment (IDE) options coupled with an automated code-generation tool support software development. The automated code-generation tool creates ready-to-run projects containing the exact mix of hardware, software, and cloud connectivity needed for your application. The board includes a built-in Nano Embedded Debugger interface for programming and debugging using a personal computer running Windows, Mac OS, or Linux. The embedded Wi-Fi module allows the board to connect to an 802.11 b/g/n Wi-Fi access point for Internet connectivity. The cryptographic coprocessor provides secure hardware-based storage of private keys along with accelerated cryptographic operations that are much faster than performing them in software.
The Atmel START web-based tool provides the required software to get the AC164160 AVR-IoT WG Evaluation Board working with the Google Cloud. The web-based tool contains several components and libraries including microcontroller initialization, board sensor I/O drivers, interrupt-driven operating environment, MQTT and TLS libraries, and interfaces with the cryptographic coprocessor. Example programs illustrate end-to-end operation, and you can modify the source code to extend and adapt for your own applications.
To use your own AVR-IoT WG board with the Google Cloud, check out our step-by-step Connecting Google Cloud IoT and AVR-IoT WG Eval Board article that walks you through the entire process of connecting the board to Google Cloud IoT.
The COVID-19 pandemic disrupted routine business operations in several ways, but perhaps the most memorable was the strain it placed on supply chains for a number of consumer commodities. An efficient transportation management system (TMS), which helps plan and execute the shipment of goods, would not have prevented the Great Toilet Paper Shortage of 2020, but it might have made the problems more traceable and, therefore, more transparent.
While traditional TMSs have been housed on-premises, there's increasing momentum to move their operations to the cloud. The speed of business demands agility, which cloud-based TMS delivered as software-as-a-service (SaaS) packages usually provide.
A TMS is a significant component of a supply chain management (SCM) system, but only one of three pillars; the other two are enterprise resource planning (ERP) and warehouse management systems (WMS). Together, the three ensure that all workflow tasks related to orders and inventory are managed smoothly.
The ERP, for example, takes care of order and inventory management and accounting and invoicing protocols. Once orders come in, the WMS attends to fulfilling them and also manages the flow of other goods in the warehouse. Finally, the TMS helps pick the suitable carrier and delivery method for optimal cost and speed of shipment. While the process might sound simple, the choices in each step might involve hundreds of options, especially in large enterprises. So, many business considerations go into choosing and operating a TMS.
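As a toy illustration of that last TMS step, the sketch below scores carrier options with a weighted blend of cost and transit time. All carriers and rates are hypothetical, and a real TMS weighs far more factors, such as capacity, reliability, lane history, and contract terms.

```python
# Hypothetical carrier/service options for a single shipment
options = [
    {"carrier": "CarrierA", "service": "ground",  "cost": 14.50, "days": 5},
    {"carrier": "CarrierA", "service": "2-day",   "cost": 32.00, "days": 2},
    {"carrier": "CarrierB", "service": "ground",  "cost": 12.75, "days": 6},
    {"carrier": "CarrierC", "service": "express", "cost": 48.00, "days": 1},
]

max_cost = max(o["cost"] for o in options)
max_days = max(o["days"] for o in options)

def score(opt, cost_weight=0.7, speed_weight=0.3):
    # Normalize cost and transit time to [0, 1]; lower combined score is better
    return (cost_weight * opt["cost"] / max_cost
            + speed_weight * opt["days"] / max_days)

best = min(options, key=score)
print(f'{best["carrier"]} {best["service"]}: ${best["cost"]:.2f}, {best["days"]} days')
```

Shifting the weights toward speed models a customer paying for faster delivery; in a large enterprise the same scoring runs across hundreds of options per shipment.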
The global supply chain has been under intense pressure to ensure smooth operations and facilitate commerce without disruptions. Fortunately, aided by technology, many factors are merging to make every link in the supply chain more transparent.
Blame it on the Amazon effect. Consumers want their packages faster and often want to be able to track them in real time. The trend is also migrating to business-to-business transactions. But such efficiencies in the last mile of shipping need more relevant information that can be easily shared and visualized.
The Internet of Things (IoT) and its industrial equivalent have enabled sensors for more efficiencies in transport management systems and the larger supply chain. Sensors can measure and report the temperature of foods and vaccines, for example, in cold supply chains. Computer vision and artificial intelligence process information from barcodes more precisely and deliver sharper insights and real-time tracking.
Both shippers and other stakeholders up and down the supply chain, especially in the last mile, need to access updated and real-time information wherever they are conducting business, whether on the road or elsewhere.
To avoid, or at least lessen the impact of, large-scale disruptions, we will need both large and small changes throughout the supply chain. The TMS will also need to be able to process and display the continual stream of insights from advanced technologies.
The many process improvements enabled by technology need a modern TMS to best leverage the advantages. The reasons to switch to a cloud-based software system for transportation management are very similar to why one would switch to the cloud for most other software.
Cloud-based SaaS is usually able to grant authorized users access on the road and through proprietary mobile apps. Such ready availability improves worker productivity as they don't need to be at one particular location to gain insights into the global movement of all packages.
Accessing information on the road also enables TMS users to be continually connected with all relevant decision-makers—from carriers and suppliers to customers. Users can also rest assured that they are working with the latest software version that garners insights from advanced technologies and delivers them in one integrated platform. A cloud-based TMS will also ensure the latest security updates are integrated into the workflows.
One of the strongest advantages of cloud-based SaaS is its ability to scale up and down with business needs. Too often, on-premises IT infrastructure becomes outdated quickly, and the care and maintenance of the equipment and related software add to already steep capital expenditures. A cloud-based TMS lets enterprises use exactly as many software licenses as they need, shifting spending to operating expenses instead of bulking up capital expenditure (CAPEX).
The global supply chain has seen many hiccups in recent years, which often adversely affect consumers. When there are thousands of carriers and packages to ship and track, enterprises need cutting-edge TMS that will help them save time and money while keeping customers happy. The TMS might be only one component of a larger supply chain management system, but a cloud-based one helps companies tackle the many complexities smoothly while leveraging the latest advanced technologies for continual process improvements.
We’ve all spent time, maybe too much time, in clinic waiting rooms or in hospital wards watching monitors and checking the beeps of machines showing healthcare technology in action.
With every visit, you’ll notice a leap in the deployment of this technology. It’s because the healthcare industry is seeing a tech boom, particularly when it comes to leveraging cloud computing to develop on-demand, self-service online infrastructures for more effective patient care.
It’s not so much a new technology as it’s a new model for delivering computing resources.
Cloud computing got an unexpected boost from protocols deployed during the COVID-19 pandemic. To minimize contact, the healthcare industry designed more products that use remote services in place of local servers or networks to store, manage, and process data.
Cloud computing allows healthcare providers to easily share real-time data across any distance, eliminating delays in patient treatment. This emerging model also adds capabilities such as mobility, streamlined collaboration with patients and peers, easy archiving of electronic records, the ability to access and deploy high-powered analytics, telemedicine capabilities, and more.
In this week’s New Tech Tuesdays, we’ll look at new products from Micron, Xilinx, and Amphenol built for cloud computing solutions.
You’ll want speed and performance from a solid-state drive, especially when high performance with minimal power matters for all-day use on ultrathin notebooks or professional workstations. The Micron 3400 Solid State Drive (SSD) with NVMe™ is an industry-first storage solution built on 176-layer NAND in mass production. The Micron 3400 SSD has twice the read throughput and up to 85 percent higher write throughput than prior-generation SSDs with NVMe, enabling applications such as real-time 3D rendering, computer-aided design, and animation.
If your design needs true network convergence, then look no further than the Xilinx® Alveo™ SN1000 SmartNIC Accelerator Card. The SN1000 SmartNIC combines network connectivity with compute and storage acceleration in a single solution. Offered in a single-slot, half-length, full-height form factor, the card maximizes host-CPU savings in cloud services by offloading infrastructure workloads to the SmartNIC, enabling deployment of bare-metal services. The SN1000 integrates an XCU26 FPGA and an NXP Semiconductors Layerscape processor featuring 16 Arm® v8 Cortex®-A72 cores. The SN1000 has two QSFP28 network connections and an x16 PCI Express® Gen 3/Gen 4 x8 interface connected to the XCU26.
Amphenol FCI Millipacs® High Speed Right Angle Receptacles are 2.0mm modular, board-to-board, and cable-to-board interconnection systems. The hard-metric (HM) series is used extensively in applications that require data rates of up to 3Gbps. Also offered are high-speed (HS) right-angle receptacles well suited for medical, data, industrial, instrumentation, and communication applications. For medical use, they’re ideal for magnetic resonance imaging scanners and diagnostic equipment, where the right-angle receptacles retain mating compatibility with existing backplane architectures, making them a cost-effective upgrade. They are also compatible with HM backplane headers and offer lower crosstalk at higher frequencies. The receptacles are also available in a five-row version with horizontal pinning assignments that provides up to 24 differential pairs (DP) for Type A or 30 DP for Type AB per 50mm of standard module length.
COVID-19 prompted a need for immediate change in the medical industry. Meeting social distance requirements while still delivering quality patient care proved a real challenge. Cloud computing provided the answer. In this week's device selection, we focused on the design needs of cloud applications, such as speed, power, and connectivity. Cloud computing allows healthcare providers to share real-time data and eliminate delays in patient care. Besides adding mobility and enhancing collaboration with patients and peers, cloud computing ensures easy archiving of electronic records, high-powered analytics, and telemedicine capabilities.