I recently received a notebook as a gift at work. This notebook was cooler than most. On the outside, it had the company logo and was bound in leather (or faux leather). So far, so good.
When I opened it up, it provided a real surprise. There was no traditional ruled paper, nor were clean, blank white sheets staring back at me. Instead, it was grid graph paper, small squares everywhere.
Receiving this notebook made me flash back several decades to recording my high school physics experiments (in a similar but not as nicely bound notebook). In physics class, my classmates and I would be called upon to emulate famous past experiments. One we were asked to reproduce was the Italian polymath Galileo Galilei’s (1564–1642) inclined-plane experiment involving steep and shallow inclines (Figure 1). His experiment helped displace Aristotelian conceptions of physics by demonstrating that objects experience uniform acceleration due to the effects of earth’s gravity.
Figure 1: Illustration of Galileo's inclined-plane experiment involving steep and shallow inclines. (Source: Mouser)
My notebook was full of measurements outlining masses (grams, g), slope angles (θ), sine values of the slopes (sinθ), and duration times (seconds, s). Many physics experiments were done like this, including studies of the laws of motion emulating Isaac Newton (1642–1727) and demonstrating relationships such as F (force) = m (mass) × a (acceleration), or simply F = ma. The net result after collecting data was usually to assemble the tabulations made on the left side of the page and turn them into various Cartesian coordinate graphs and functions on the right side of the open page.
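Those tabulated columns map neatly to code. Here is a minimal sketch of the physics behind the experiment, assuming a frictionless plane and g = 9.81m/s² (the masses and angles are made-up examples, not values from my old notebook):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def incline_acceleration(theta_deg):
    """Acceleration of a frictionless cart on a plane inclined
    at theta degrees: a = g * sin(theta)."""
    return g * math.sin(math.radians(theta_deg))

def net_force(mass_kg, theta_deg):
    """Newton's second law, F = m * a, for the cart on the incline (newtons)."""
    return mass_kg * incline_acceleration(theta_deg)

# A hypothetical 0.5kg cart on a 30-degree incline
a = incline_acceleration(30)  # ~4.905 m/s^2
F = net_force(0.5, 30)        # ~2.45 N
```

Shallower angles give smaller sinθ and gentler acceleration, which is precisely why Galileo used inclines: they slowed free fall down enough to time it.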
French mathematician René Descartes’ (1596–1650) coordinate system enabled points in three-dimensional space to be uniquely plotted by a set of numerical coordinates defined by mutually orthogonal axes, called the x-axis, y-axis, and z-axis. Algebra could now be easily applied to geometry.
Electronic component technology has advanced a long way in the decades since I found myself learning the basics of physics in the lab. Today, tri-axis accelerometers can easily calculate forces on all three axes simultaneously. Tri-axis accelerometers make collecting XYZ axis information easier than learning your ABCs.
One electronics manufacturer making tri-axis accelerometers easier than learning your ABCs is Kionix. Kionix, a ROHM Semiconductor Group Company, is a manufacturer of silicon (Si) Micro-electromechanical Systems (MEMS) accelerometer products (Figure 2). MEMS accelerometers measure the static or dynamic force of acceleration, and Kionix offers a variety of tri-axis designs.
Figure 2: Kionix, a ROHM Semiconductor Group Company, is a global leader in the design and fabrication of high-performance, silicon-micromachined MEMS inertial sensors. (Source: Kionix)
Kionix introduced the KX003-1077 Tri-axis Accelerometer (Figure 3). The KX003-1077 Tri-axis Accelerometer offers four extended user-configurable g-ranges (±2g, ±4g, ±8g, and ±16g) and three resolution modes (8-bit, 12-bit, and 14-bit). The accelerometer consumes <2µA at its lowest power setting and offers sampling rates from 1Hz to 1600Hz. It delivers lower noise, exceptional shock resiliency, stable performance over temperature, and better timing accuracy than Kionix’s previous-generation accelerometers.
Figure 3: The Kionix KX003-1077 Tri-axis Accelerometer with a digital I²C interface and motion detection/wake-up interrupt offers up to 14-bit resolution and user-selectable g-ranges. (Source: Mouser)
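The resolution and g-range settings together determine the sensitivity of the output. A quick sketch of the arithmetic (the function names here are illustrative, not part of any Kionix API):

```python
def counts_per_g(bits, g_range):
    """Sensitivity of a two's-complement accelerometer output:
    2**bits output codes span the full +/-g_range scale."""
    return (2 ** bits) / (2 * g_range)

def raw_to_g(raw, bits, g_range):
    """Convert a signed raw sample to acceleration in g."""
    return raw / counts_per_g(bits, g_range)

# 14-bit mode at +/-2g: 16384 codes over a 4g span -> 4096 counts per g
sens = counts_per_g(14, 2)     # 4096.0
accel = raw_to_g(4096, 14, 2)  # 1.0 g (a device sitting flat reads gravity)
```

The tradeoff is visible in the math: widening the range to ±16g with the same 14 bits cuts the sensitivity to 512 counts per g.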
Kionix creates mechanical silicon structures, which are essentially mass-spring systems that move in the direction of the applied acceleration. The capacitive accelerometer senses changes in capacitance between adjacent microstructures within the device. If an accelerative force moves one of these structures, the capacitance will change, and the accelerometer will translate that capacitance to a voltage for interpretation. The accelerometer further utilizes common-mode cancellation to decrease errors from process variation, temperature, and environmental stress.
A separate Application Specific Integrated Circuit (ASIC) device packaged with the sensor element handles all of the signal conditioning and digital communications for the KX003-1077 accelerometer. The complete measurement chain is composed of a low-noise capacitance to voltage amplifier, which converts the differential capacitance of the MEMS sensor into an analog voltage that is sent through an Analog-to-Digital Converter (ADC). Users can access the acceleration data through the I2C digital communications provided by the ASIC. In addition, the ASIC contains all of the logic to allow the user to choose data rates, g-ranges, filter settings, and interrupt logic.
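On the host side, pulling a sample over I²C reduces to reading an output register pair and assembling a signed value. The sketch below shows only the byte-assembly step; the register address is a placeholder, and the 14-bit left-justified format is an assumption, so check the KX003-1077 datasheet for the actual register map and data format:

```python
# Placeholder register address -- consult the datasheet for the real map.
XOUT_L = 0x06  # assumption: low byte of the X-axis output register pair

def assemble_sample(low, high, bits=14):
    """Combine low/high output bytes into a signed acceleration sample.
    Assumes the result is left-justified in the 16-bit register pair."""
    raw = (high << 8) | low
    if raw >= 0x8000:            # two's-complement sign extension
        raw -= 0x10000
    return raw >> (16 - bits)    # right-align the bits-wide value

# Example: high byte 0x40, low byte 0x00 -> +0x4000 left-justified
sample = assemble_sample(0x00, 0x40)  # 4096 counts
```

With a bus library such as smbus2, the two bytes would come from a block read starting at the low-byte register; the assembly logic above stays the same regardless of the transport.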
The KX003-1077 accelerometer comes in a 2mm x 2mm x 0.9mm Land Grid Array (LGA) plastic package and operates from a 1.7VDC–3.6VDC supply. It uses regulators to maintain constant internal operating voltages over the range of input supply voltages. This results in stable operating characteristics over the range of input supply voltages and virtually undetectable ratiometric error.
The world is a place full of motion where many things are on the move. Reflect for a moment, and you'll quickly realize that tri-axis accelerometers from Kionix make sensing motion and movement in XYZ as easy as ABC. I only wish the classical dynamics of spinning tops I studied in physics had been as straightforward.
(Source: Analog Devices)
Simple is better. Simplicity allows for fewer things to go wrong, gives you fewer things to figure out from a design standpoint, and saves on cost. This blog discusses an alternative, simplified I2C/SPI communication solution for power- and I/O-constrained systems.
Traditionally, I2C and SPI have employed multiple wires. However, it is possible to deliver power and data to operate I2C and SPI endpoints, such as humidity or temperature sensors, using a single wire connection and ground. Specifically, Analog Devices’ 1-Wire® technology offers a robust solution when working with I/O-constrained systems where there might be only one or a few pins available on the host. The Analog Devices DS28E18 1-Wire® to I2C/SPI Bridge is an excellent example of a bridge device that leverages 1-Wire to address a standard set of system challenges like wiring limitations, communication distance, and protocol conversion (Figure 1).
Figure 1: The chart shows challenges associated with wiring limitations, communication distances, and protocol conversion. (Source: Analog Devices)
This 1-Wire interface technology is offered through Analog Devices and has been around since the 1980s. A single dedicated connection delivers power and data, enabling various applications such as medical sensors, accessory identification, and remote or local environmental sensing. The 1-Wire solution offers the benefits of operating SPI or I2C devices with a single-contact interface, eliminating the need for an external power source, and the flexibility of 1-Wire and I2C/SPI master operational modes for these applications.
Two contacts operate this interface. With the 1-Wire single connection and a ground connection, designers can communicate at two different speeds: 11.7kbps, and 62.5kbps in overdrive mode. A microcontroller host attaches to a remote SPI sensor through a 1-Wire interface to the DS28E18 bridge using only two connections, the 1-Wire I/O and ground (Figure 2).
Figure 2: The diagram illustrates the system-level configuration. (Source: Analog Devices)
One of the unique features of the DS28E18 communications bridge is that it can harvest up to 10mA of current to power the externally connected I2C/SPI endpoints. This device can also drive I2C and SPI endpoints at clock rates up to 1MHz and 2.3MHz, respectively. The DS28E18 communications bridge comes in a small 2mm x 3mm TDFN package and operates at 3.3V (±10%) within environmental conditions of -40°C to +85°C.
Besides the 1-Wire interface and capabilities, the DS28E18 encompasses three main blocks (Figure 3) that are essential to interface with the I2C/SPI endpoints:
Figure 3: DS28E18 block diagram showing the three main blocks essential to interface with the I2C/SPI endpoints. (Source: Analog Devices)
The Command Sequencer processes the buffer data, stores it at the specified address in SRAM (128 bytes at a time), and returns a CRC16 for the host processor to validate data transmission. The sequencer minimizes the host’s communication overhead by storing the most commonly used commands in the SRAM. The DS28E18 provides a 512-byte buffer in SRAM that can be loaded with multiple I2C or SPI commands. Once loaded, the host controller sends a command to execute the sequence, provide power, and collect data from attached I2C or SPI peripherals. A subsequent 1-Wire command reads the collected sensor data.
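That CRC16 can be verified on the host side. Below is a sketch of the standard 1-Wire-family CRC-16 (polynomial x¹⁶ + x¹⁵ + x² + 1, processed bit-reflected as 0xA001). Note this is a generic implementation: the DS28E18 datasheet defines exactly which bytes each CRC covers and whether it is transmitted inverted, so treat this as a starting point:

```python
def crc16(data, crc=0):
    """Bit-reflected CRC-16 with polynomial 0x8005 (0xA001 reversed),
    as used by the 1-Wire protocol family."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Standard CRC-16/ARC check value for the ASCII digits 1-9
assert crc16(b"123456789") == 0xBB3D
```

Validating the CRC after each 128-byte transfer lets the host catch corruption on the single-wire link before committing a sequence to the SRAM.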
Three types of commands, which reside in the blue highlighted elements (Figure 4), operate this device. These commands are:
Figure 4: The highlighted block diagram illustrates where commands that operate the DS28E18 reside. (Source: Analog Devices)
The host initiates communication to identify and select the DS28E18 bridge device using 1-Wire ROM level function commands. Once selected, device function commands interact with the sequencer. Figure 4 lists the 1-Wire ROM and device function commands available for the DS28E18. Refer to the DS28E18 Technical Documentation for detailed information.
The DS28E18 has a 144-byte command buffer: 16 bytes for device function command operations and 128 bytes to transfer formed packets with sequential commands into the 512-byte SRAM sequencer. The formed packets installed in the SRAM sequencer can be called to write and read I2C/SPI data to attached slaves. The maximum length of a sequence is 512 bytes. The I2C/SPI slave response is recovered using a Read sequencer command upon completion of a sequence.
The result byte returned indicates success or any error encountered, such as receiving a NACK. If the byte indicates an error, two additional bytes are returned indicating the error position in the sequence.
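Host-side handling of that result byte might look like the following sketch. The success code and the little-endian byte order of the position field are placeholders, not values from the datasheet, which defines the actual encoding:

```python
RESULT_SUCCESS = 0xAA  # placeholder code -- see the DS28E18 datasheet

def parse_sequencer_result(response):
    """Interpret the sequencer's result byte; on error, the next two
    bytes locate the failing command within the 512-byte sequence
    (assumed little-endian here)."""
    status = response[0]
    if status == RESULT_SUCCESS:
        return {"ok": True}
    position = response[1] | (response[2] << 8)
    return {"ok": False, "error_code": status, "position": position}

ok = parse_sequencer_result(bytes([0xAA]))
err = parse_sequencer_result(bytes([0x01, 0x10, 0x00]))  # e.g., NACK at offset 16
```

Knowing the failure offset lets the host retry only the affected part of a long sequence instead of re-sending all 512 bytes.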
The sequencer’s utility commands provide various functions such as delays and power gating to an endpoint device via the SENS_VDD pin. The delay can be employed in a sequence to allow additional time for an I2C/SPI endpoint device to perform a conversion or allow for settling after power is applied to the endpoint. The delay ranges from 1ms to 32s. The power provided to the endpoint is harvested from the 1-Wire interface. This means that the host must enable a strong pullup for the entirety of the sequence. The DS28E18 can deliver up to 10mA of current.
The DS28E18's GPIOs, I2C, and SPI interfaces multiplex across four pins (Figure 5). The I2C interface can operate at 100kHz, 400kHz, or 1MHz, while the SPI can be configured to operate at 100kHz, 400kHz, 1MHz, or 2.3MHz. The GPIOs are not available when configured as SPI.
Figure 5: GPIO/I2C/SPI Pin Multiplexing and Interface Control (Source: Analog Devices)
To get hands-on experience, order the Analog Devices DS28E18EVKIT Evaluation System.
Marco Antonio Ramirez Castro
Marco A. Ramirez Castro is an entry-level Product Applications Engineer at Analog Devices. He graduated with a bachelor’s degree in Electrical Engineering from the University of Texas at El Paso, developing himself professionally through multiple internships with Fortune 500 companies and personally as a board officer and Vice President of UTEP’s chapter of the recognized MAES/SHPE Latino organization. There he learned important leadership skills, helping the chapter earn awards including SHPE’s regional chapter of the year, SHPE’s Chapter Excellence award, and MAES/SHPE’s Chapter Community Service award. With his goals in mind, Marco plans to keep growing in this industry and is excited to learn from and be part of this community of amazing engineers, with the hope of becoming a great one himself.
Marco Antonio Ramirez Castro authored the Just 1-Wire to Operate I²C/SPI Endpoints blog, which is repurposed here with permission.
As humans, we are blessed with extraordinary biological sensors, such as our eyes and ears, coupled with an incredible processor in the form of our brain. Those who create machine-vision systems began by trying to replicate our human abilities using imaging sensors operating in the visual spectrum coupled with artificial intelligence (AI) and machine-learning (ML) technologies to provide object detection and recognition capabilities. The proficiencies of these systems can be further enhanced by employing dual sensors to provide binocular vision and depth perception.
The problem is that, as wonderful as traditional machine-vision systems are, they suffer from the same problems as the human eye, such as being limited to the visual spectrum and operating poorly in low-light and inclement weather conditions, such as rain, snow, and fog. Imagine the possibilities if these machine-vision systems could overcome these limitations. Here, we will explore the challenges associated with conventional imaging systems, as well as a solution for future imaging applications such as people tracking, volumetric measurement, robotics, and more.
A downside of conventional and thermal sensors is that they aren’t tremendously effective at determining distance or tracking multiple objects in motion as they pass in front of or behind each other. One option to overcome this limitation is to augment conventional and thermal imaging sensors with one or more light detection and ranging (LiDAR, also known as laser imaging, detection, and ranging) sensors.
Conventional imaging systems are classed as being passive on the basis that they detect whatever electromagnetic energy, such as visible light or infrared, comes their way from the outside world. By comparison, LiDAR is categorized as an active remote sensing system because it generates light using a rapidly firing laser. A LiDAR system measures the time it takes for the emitted light to travel to any objects in front of it and to come back again. These times are used to calculate the distances traveled.
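The round-trip timing math is simple enough to sketch, using the speed of light c:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s):
    """Distance to a target from a LiDAR round-trip time: the pulse
    travels out and back, so the one-way distance is half the product."""
    return C * round_trip_s / 2

# A reflection arriving 20 nanoseconds after the pulse fired
d = tof_distance(20e-9)  # ~3.0 m
```

The nanosecond scale of these intervals is why LiDAR receivers need such fast, precise timing electronics: a 1ns timing error corresponds to roughly 15cm of range error.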
In much the same way that a standard imaging system creates a 2D-array of pixels (picture elements), a LiDAR imaging system creates a 3D-array of voxels (volume elements). The narrow laser beam employed by the LiDAR can detect and map physical features with very high resolutions. In fact, LiDAR dramatically outperforms standard stereo-depth cameras in applications where high resolution and high-accuracy depth data is required.
Depending on the target application, designers can use AI/ML systems in conjunction with various combinations of sensors including:
Let’s take a look at a possible use case. Consider the COVID-19 pandemic. One symptom of someone infected with the coronavirus is an elevated temperature. Designers could augment a conventional machine-vision system with thermal and LiDAR sensors to detect potential carriers in an environment such as the travelers’ lounge in an airport.
Intel’s RealSense technologies offer a wide variety of vision-based solutions designed to give your designs the ability to understand the world in 3D. The latest addition to the family is the Intel® RealSense™ LiDAR Camera L515 (Figure 1), which has the bragging rights of being the world’s smallest—61mm in diameter, 26mm in depth—and most power-efficient high-resolution LiDAR that’s capable of capturing tens of millions of voxels per second.
Figure 1: The Intel® RealSense™ LiDAR Camera L515 has a diameter smaller than a tennis ball. (Source: Intel)
Based on a revolutionary solid-state LiDAR depth technology designed for indoor applications, the L515 is perfect for applications that require depth data at high resolution and high accuracy. With a range of 0.25 meters to 9 meters, the L515 provides over 23 million accurate voxels per second, with a depth resolution of 1024 x 768 at 30 frames per second (fps). For applications requiring the combination of traditional machine vision and LiDAR, the L515 also features a full high-definition (FHD) RGB video camera sensor, along with additional sensors such as a MEMS accelerometer and a MEMS gyroscope (Figure 2).
Figure 2: Exploded view of the Intel® RealSense™ LiDAR Camera L515 (Source: Intel)
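The quoted voxel rate follows directly from the depth resolution and frame rate:

```python
def voxel_rate(width, height, fps):
    """Depth points produced per second at a given resolution and frame rate."""
    return width * height * fps

# L515 depth stream: 1024 x 768 at 30 fps
rate = voxel_rate(1024, 768, 30)  # 23,592,960 -- "over 23 million voxels per second"
```

That throughput, sustained continuously, is what the camera's internal vision processor and the host-side SDK have to keep up with.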
Furthermore, the L515 boasts an internal vision processor that performs tasks such as motion blur artifact reduction, thereby offloading such duties from the host processor. The lightweight L515 consumes less than 3.5 watts of power, making it the most power-efficient high-resolution LiDAR camera on the market. The combination of small size and low-power consumption makes the L515 ideal for use in handheld products and small autonomous-robot applications.
If you are interested in taking advantage of the L515 in your own designs, Intel’s open-source RealSense software development kit (SDK) 2.0 is both cross-platform and operating system independent. In addition to Windows, Linux, and Android, you can also install the SDK 2.0 on Jetson TX2, Raspberry Pi 3, and macOS platforms.
The L515 uses the same SDK as all other current-generation RealSense Technology family devices, thereby allowing an easy transition from any of Intel’s other 3D cameras. The idea here is to develop once, and to then deploy on any current or future Intel RealSense depth device. Who among us could argue with a philosophy like that?
Potential applications for the L515 can get any designer’s head buzzing. LiDAR has traditionally been associated with autonomous vehicles and other outdoor applications, but the L515 opens the floodgates to all sorts of possibilities, including people tracking, volumetric measurement, robotics, 3D scanning, and the list goes on. By pairing thermal imaging technologies with LiDAR technology, designers can overcome limitations commonly associated with conventional imaging systems.
How about you? What types of systems can you envisage deploying with the Intel® RealSense™ LiDAR Camera L515?
Most engineers cringe when they see an Arduino in my toolbox because it's often seen as being too easy to use or not feasible for serious work. For the most part, they are correct, but that's not what I'm here to debate. What these people don't realize is what a powerful “shape-shifting” tool this low-cost development board is. Here are three commonly overlooked uses for an Arduino:
Open Source Logic Sniffer (OLS) is a simple software tool that implements features of a digital logic analyzer (Figure 1). The OLS client is Java-based, which allows it to run happily on most operating systems. Due to its simple serial protocol, many open source tools, like Bus Pirate, Logic Pirate, and of course Arduino, have basic support for OLS. With zero external components (wires not included) and Andrew Gillham's open source code, you can program your Arduino UNO to become a digital logic analyzer.
Figure 1: Open Source Logic Sniffer is a simple software tool that implements features of a digital logic analyzer.
Here's a list of some of the features available to you when using an ATmega328-based Arduino:
It might not have blazing specs, but sometimes it's just enough to get you by in a squeeze. I often find myself using either the Arduino or FPGA implementation to verify proper communication protocol while bit-banging code.
Like many college students, I find that money is often a deal breaker for most of my decisions. In this particular case, I am referring to the overhead cost of purchasing an in-system programmer for microcontroller designs. Perhaps you want to make one of your projects permanent on a PCB, or you are just curious to see how to “manually” program an AVR. Whatever the case may be, this amazing implementation of the Arduino, in my opinion, takes the cake.
The process is quite simple. In fact the sketch is now included with all new versions of the Arduino IDE. To begin:
Once completed, you can now use the Arduino pins 10, 11, 12, and 13 as RESET, MOSI, MISO, and SCK, respectively, to program your target AVR device. The only thing left to do is add these flags to your makefile or avrdude command line:
-P -c avrisp -b 19200
I know it seems kind of obvious, but the Arduino has a built-in USB-to-serial converter (an FTDI chip on older boards, an ATmega16U2 on the UNO R3). For those Arduino products that have through-hole-style MCUs, such as the Arduino UNO R3, if you carefully pry the ATmega DIP chip out of its socket, you can free up the serial pins (RX and TX) for other cool uses. I find myself doing this a lot since I like to program informative menus into my microcontroller programs. Sometimes a simple interface that lets you change the mode or request data at runtime can save you hours of debugging time. Figure 2 is an excellent example of one of these menus that I made while designing a bus-tracking system for my campus bus route.
Figure 2: Custom menus for a bus-tracking system.
There you have it. Who knew the Arduino could be as versatile as a Swiss army knife? I hope I have encouraged you to dust off your Arduino and begin exploring new areas of the wonderful world of electronics. If you made something cool using one of these tools, I'd love to hear about it in the comments.
(Source: Flystock/Shutterstock.com (left) www.ebike-mtb.com (right))
May is National Bike Month, which promotes the many benefits of bicycling, showcases the evolution of bike tech, and encourages more folks to give biking a try.
While many prefer to stick to paved surfaces when cycling, others see their bike as an escape vehicle and a way to connect with the wilds of nature. Here is where dirt and advanced technology have no choice but to become one, and one of the latest testing platforms for that advancement is the Electric Mountain Bike (eMTB).
What is it about bikes and our desire to constantly want to reinvent the wheel, so to speak? Let's take a look at the technology behind creating a stronger bond with the great outdoors. As you'll see, engineering is at the hub of new development and cycling innovation—changing the way people are pedaling over the mountains and through the woods.
Tesla's Battery Day marks one of the most highly anticipated moments on the tech calendar each year. At this event, huge advances in battery technology are unveiled in the pursuit of electric vehicles that are more powerful, longer lasting, greater in range, and less expensive. It all comes down to fulfilling Elon Musk's vision of Tesla selling a fully electric car for $25,000 USD within three years.
But how does this technology trickle down to eMTBs, and how will it impact their development going forward? Let's start off by looking at the batteries eMTBs use now.
Currently, most eMTB batteries use 18650 Lithium-ion (Li-ion) cells. Li-ion batteries were first developed in 1985, and they have been the main driving force behind electric vehicle development because they are rechargeable and deliver a lot of power without taking up too much space. Measuring 18mm x 65mm, about the size of your finger, the cells are welded together in packs, connected in parallel. Remove the plastic shell, and it basically looks a bit like a bunch of AA batteries all joined together.
The all-important number with batteries is watt-hours, a measure of how much energy the battery stores. For example, a 250 watt-hour (Wh) battery could drive a 250W motor at full power for one hour, while a 500Wh battery could drive it for two. Most e-bike batteries sit between 300Wh and 550Wh. The end goal with batteries is getting as many watt-hours as possible, but if you just keep adding cells, you start adding weight and volume.
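The ride-time arithmetic above is worth sketching out (this is the ideal case, ignoring conversion losses and discharge limits):

```python
def runtime_hours(battery_wh, motor_w):
    """Hours a battery can drive a motor at a constant power draw
    (ideal case: no losses, full depth of discharge)."""
    return battery_wh / motor_w

assert runtime_hours(250, 250) == 1.0  # 250Wh battery, 250W motor: one hour
assert runtime_hours(500, 250) == 2.0  # double the capacity, double the time
```

Real-world range varies with assist level, terrain, and rider weight, but the watt-hour figure sets the hard ceiling.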
Looking at the positives, manufacturers aren't content to stand still; there is plenty of innovation still transpiring in this size of cell. In fact, Bosch introduced its 625Wh Powertube battery last year, offering a huge range despite the small cell size. The internals of these cells are still being perfected, too, with engineers experimenting with different cathode and anode materials to boost capacity.
That's not all. One of the other big problems of increasing the volume of a cell is that it makes it harder for heat to escape. If a battery gets too hot, it has to operate at reduced power or risk damaging itself permanently. eMTBs are often out in the sun all day, so better heat management of an 18650 cell could lead to better performance over a longer period of time.
For those of you out there who follow Tesla, it should come as no surprise that Tesla doesn't currently use 18650 cells. Rather, it uses the 21700 standard, with cells measuring 21mm x 70mm. Tesla developed this cell with Panasonic in 2017, and its larger volume means it can be packed with more anode and cathode material to hold more energy. Tesla also claims it has a longer lifespan and requires less charging.
Riders are now starting to see these cells come into eMTBs with the Specialized Turbo Levo. This bike has a massive 700Wh battery providing a generous amount of ride time, making it one of the biggest you can get on an eMTB today. Additionally, its motor delivers up to 565 watts of power and 90Nm of peak torque. Overall, the Specialized Turbo Levo stands at the pinnacle of the industry for its performance and technology.
However, Tesla being Tesla, they haven't stopped with the 21700. Their recent announcement at Battery Day shared the next evolution of its cell technology. The Tesla battery has gone up in size again, this time far more significantly to 4680 or 46mm x 80mm. According to Drew Baglino, Sr. VP of powertrain and engineering at Tesla, this new innovation boosts the energy by five times, ups the power by six times, and increases the range of a car using these batteries by 16 percent in relation to the 21700.
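The naming convention encodes each cell's geometry (diameter and length in millimeters), which makes the volume jump easy to quantify:

```python
import math

def cell_volume_cm3(diameter_mm, length_mm):
    """Volume of a cylindrical cell, in cubic centimeters."""
    r_cm = diameter_mm / 20      # mm diameter -> cm radius
    return math.pi * r_cm**2 * (length_mm / 10)

v18650 = cell_volume_cm3(18, 65)  # ~16.5 cm^3
v21700 = cell_volume_cm3(21, 70)  # ~24.2 cm^3
v4680  = cell_volume_cm3(46, 80)  # ~133 cm^3

ratio = v4680 / v21700  # the 4680 holds roughly 5.5x the volume of a 21700
```

That raw volume jump, before any chemistry or tabless-design improvements, already explains much of the headline energy gain.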
The advantages for eMTBs of a more powerful battery are clear. You can either deliver the same power in a smaller, lighter package, which makes the bike handle and look more like a traditional mountain bike, or you can keep the battery the same size and boost the range of the bike. But before you get too excited, remember that it took roughly two years for the 21700 tech to trickle down from Tesla Model 3 cars to the Specialized Turbo Levo. And like all new technology innovations, expect the costs to be steep (new technology is rarely cheap), just like the mountains it's designed to help climb.
A common complaint engineers often hear is weight: eMTBs are too heavy. However, eMTBs have to withstand up to 50 percent higher loads on the trail than regular mountain bikes. As a result, engineers have to factor in braking loads, especially on the fork. Brakes also need to incorporate bigger rotors and feature four-piston brake calipers to provide enough stopping power. Suspension kinematics, front and rear, along with the bearings and linkages, have to be rethought and designed for strength and durability. Individual components like the seat post and handlebars are often pushed to their literal breaking point. All of this quickly adds up.
Using current industry data, the maximum permissible weight with an active riding style reaches its limit at 150kg. To go beyond that, engineers would have to develop frames and components that are sturdier and more robust, designing them differently than standard mountain bikes. This would make the bikes much heavier and riders would have to be content with eMTBs that weigh 27–29kg. The question, or the proverbial elephant left standing in the room, is… who really wants to ride such a heavy bike?
The frame material of choice for over a decade has been carbon fiber. It's lightweight and can be sculpted and manipulated to increase the frame's stiffness and flexibility or compliance for a better ride quality. The biggest downside to carbon has been its reputation for being brittle, and how an unblemished but weakened frame can suddenly fail catastrophically at unexpected moments.
Continual engineering and experience working with carbon have lessened this to some degree. However, several new approaches using hybrid composites show promise of increasing safety with minimal or no decrement in the features that make carbon fiber so highly valued in the bicycle industry. Hybrid composites blend carbon fiber strands with another material, such as plastic or steel, within the same ply (a bonded layer of fibers). Innegra Technologies, Dyneema, and REIN4CED are a few companies blazing the trail for others to follow.
The automobile industry has Ehra-Lessien, the Nürburgring, or the Millbrook Proving Ground to put its new vehicle technologies to the ultimate test. Recently, I came across a 2021 eMTB test featured on YouTube. In the rugged, rock-strewn desert of St. George, Utah, twelve different manufacturers' eMTBs were tested and tortured by a panel of professional riders. Check out how far eMTBs have come and where they still need to go in the future. One thing is for certain: this growing genre of mountain bikes continues to evolve and get even better each year.
Although not everyone who reads this blog is a cyclist, thanks to National Bike Month, we can all marvel at the bicycle industry's advances in technology, especially when engineers decide to venture down the path less traveled. And that has me and Mother Nature smiling. Now go out and fling some mud in the name of technology.
Learning electronics is a lot like peeling an onion. No matter how much you learn, there is always a deeper level of knowledge that can be uncovered and explored. It’s not just that new technologies are constantly being invented; it’s also that many components have become so mature that we simply take them for granted. Resistors and capacitors are perfect examples of this point. For the first few years of my college education, resistors and capacitors were, for me, simply defined by their schematic symbols. It wasn't until I got into the labs that they became tangible bits of metal, plastic, and various other materials. Still, in my then-inexperienced mind’s eye, a resistor was a resistor. A capacitor was, at most, polarized or non-polarized. But if we peel back the onion, there is a lot more there than meets the eye.
When you get to the point of designing your first project from scratch, after you’ve laid out the circuit, it’s time to build the Bill of Materials (BOM). Naturally, you would go to mouser.com and start entering part names or numbers into the search bar. While you might be prepared to select the proper resistance value and/or power rating, what is probably less clear is the difference between the types of resistors and capacitors. Let’s take a look.
The number of choices is enough to make your head explode (Figure 1)! Obviously, the various resistor and capacitor types exist for good reasons. The various construction techniques yield different performance characteristics. Many are useful only in very high-end, specialized applications, and unit costs can range from a few pennies to over a hundred dollars for certain high power resistors. So what should most makers be looking for when selecting resistors and capacitors for their next project?
Figure 1: So many types of resistors! Where to start?
Depending on where you are in developing your soldering skills, the first thing you have to decide is your preference for through-hole or surface-mount (SMD) components. Through-hole components are much easier to hand solder but at the expense of taking up much more real estate on your printed circuit board (PCB).
Carbon resistors are perhaps the most common resistors that you will use initially. They are included in many starter kits. They tend to be very cheap but also electrically “noisy,” especially as they get warm. For applications that don’t require tight tolerances, such as limiting current to LEDs, carbon resistors are just fine.
If you are willing to spend a bit more, you can get metal film resistors that exhibit less noise and better stability as temperatures increase. This makes them better for high-frequency applications such as RF circuits. In addition, very large resistance values are possible (measured in megaohms).
Wirewound resistors have relatively higher unit costs, and their resistance values are only common up to a few kilo-ohms. However, they tend to be able to handle a lot of current, and they are very precise, making them common in sensing applications.
Metal oxide film resistors offer a pretty good compromise between low-cost carbon resistors on the one hand and higher-performance film (smaller tolerance, more precision) and wirewound resistors (greater power-handling ability) on the other. I would consider these the default type for most applications unless I know for sure another type is absolutely needed or a cheaper variety will suffice.
Thick film resistors offer slightly less performance than thin film resistors but also tend to be less expensive.
Just like resistors, capacitors are also manufactured in a variety of ways, yielding different performance characteristics and costs. For the most part, the variation between capacitors is based on the techniques and materials used in their construction, chiefly the dielectric that separates the two plates of a capacitor.
These capacitors often take the form of the blue and silver vertical cylinders (though not always) that populate many a circuit board. They are very inexpensive, have very high capacitance per unit volume, and can handle high voltages. On the flip side, they are polarized, suffer from relatively high leakage currents, and have a relatively high Equivalent Series Resistance (ESR) that makes them poor performers at high frequencies. In addition, the ESR in aluminum electrolytics gets worse with time. When aluminum electrolytics fail, an open circuit results. They are often used in the voltage regulation portion of a circuit.
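The ESR point is easiest to see numerically. A real electrolytic behaves roughly like an ideal capacitance in series with a small resistance, so its impedance falls with frequency until it hits the ESR floor. A minimal sketch, with hypothetical component values:

```python
import math

# Magnitude of a capacitor's impedance, modeled as an ideal capacitance
# in series with its ESR. Component values are illustrative, not taken
# from any specific part.
def impedance(c_farads, esr_ohms, freq_hz):
    x_c = 1.0 / (2 * math.pi * freq_hz * c_farads)  # capacitive reactance
    return math.sqrt(esr_ohms ** 2 + x_c ** 2)

C = 100e-6   # 100 uF aluminum electrolytic (hypothetical)
ESR = 0.5    # ohms (hypothetical; rises further as the part ages)

for f in (100, 1_000, 10_000, 100_000):
    print(f"{f:>7} Hz: |Z| = {impedance(C, ESR, f):.3f} ohm")
```

At 100 Hz this 100 µF cap looks like roughly 16 Ω, but by 100 kHz the impedance has flattened out at essentially the 0.5 Ω ESR, and as that ESR grows with age, the capacitor filters progressively worse.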
These capacitors are also polarized and tend to be a bit more expensive than aluminum electrolytics. For that extra cost, tantalum caps exhibit lower leakage currents and excellent stability of their capacitance value. Relative to aluminum electrolytics, they can’t handle higher voltages, and they are notably intolerant of reverse voltage and voltage spikes. Their best benefit is their size-to-capacitance ratio, and they are typically used when circuit board real estate is at a premium. When they fail, they result in a short circuit.
The most widely used type of capacitor, ceramic disk capacitors are often those little orange-colored discs (again, they exist in other form factors and colors) that are typically spread pretty liberally across many circuit boards. Often, they are used across the power and ground pins of an integrated circuit, which helps cut down on problems associated with voltage drops that can cause a chip to reset. They are also used in high-speed signal coupling and decoupling applications. Ceramic disk caps typically range in capacitance from a few picofarads to a few microfarads. They also have pretty low voltage ratings. When they fail, they result in a short circuit.
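How big does that decoupling cap next to a chip need to be? A common rule of thumb treats the capacitor as a tiny local charge reservoir: it must supply the chip’s transient current for a brief interval without letting the supply rail sag beyond what the chip tolerates. A sketch with hypothetical numbers:

```python
# Rough decoupling-capacitor sizing: C = I * dt / dV, where the cap must
# cover a current transient of I amps for dt seconds while the rail sags
# no more than dV volts. All numbers below are illustrative assumptions.
I_TRANSIENT = 0.050   # amps the chip briefly draws (hypothetical)
DT = 1e-6             # seconds the cap must hold the rail up (hypothetical)
DV = 0.1              # volts of sag the chip tolerates (hypothetical)

c_needed = I_TRANSIENT * DT / DV   # farads
print(f"minimum decoupling capacitance: {c_needed * 1e9:.0f} nF")
```

The answer here (500 nF) gets rounded up to a nearby standard value in practice, which is one reason you so often see 0.1–1 µF ceramics parked next to IC power pins.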
Film capacitors could warrant an entire blog post unto themselves, given the wide variety of dielectric materials and construction techniques used for them. They are typically more expensive than other capacitor types, but their parasitic losses are low, and they find use in high-current applications. In addition, their tight tolerances make them appealing for timing applications, such as motor speed controllers. When they fail, they result in an open circuit; however, their life expectancies are rated in decades.
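To see why tight tolerance matters in timing circuits, consider the classic 555 astable oscillator, whose output frequency depends directly on the timing capacitor. The resistor and capacitor values below are hypothetical:

```python
# Classic 555-timer astable frequency: f = 1.44 / ((R1 + 2*R2) * C).
# The timing capacitor's tolerance translates almost directly into
# frequency error, which is why tight-tolerance film caps shine here.
def astable_freq_hz(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

R1 = 10_000    # ohms (hypothetical)
R2 = 47_000    # ohms (hypothetical)
C = 100e-9     # 100 nF timing capacitor (hypothetical)

f_nominal = astable_freq_hz(R1, R2, C)
# A +/-10% capacitor shifts the frequency by roughly the same proportion:
f_low = astable_freq_hz(R1, R2, C * 1.10)
f_high = astable_freq_hz(R1, R2, C * 0.90)
print(f"nominal: {f_nominal:.1f} Hz, with 10% cap tolerance: "
      f"{f_low:.1f}-{f_high:.1f} Hz")
```

A ±10% capacitor swings the output frequency by roughly ±10% as well, while a 1–2% film cap keeps a motor controller or clock source much closer to its design frequency.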
Hopefully, you won’t look at capacitors and resistors the same way again! This discussion was not meant to be an exhaustive review, but rather a collection of some of the more important rules of thumb that have been passed down to me over the years. There are lots of other nuances, so be sure to do your homework if this article has piqued your curiosity. Also, keep in mind that buying in bulk can help reduce unit cost, so higher-performance parts can become more economical if you are manufacturing a product in volume.
One last tip: When prototyping, I much prefer through-hole components to keep things simple. If I intend to homebrew a circuit board with surface-mount components, I try to stick to the following packages: 0603, 0805, 1210, and 2010 in a thin or thick film variety.
Your turn: What are your experiences with using different types of caps and resistors? Let us know in the comments below.
Frontiers are meant to be explored, and space—it is said—is the final frontier. In many ways, the exploration and long-term settlement of our solar system will be entirely different from the European colonization of the Americas or the later western expansion of the United States. The extreme environment of space, the vast distances, lack of indigenous resources, and the mere act of leaving Earth’s surface will present challenges unique to our “star-ward” journeys.
Despite these new complications, we can still look back to America’s westward expansion of the 1800s to find some ideas that might help our next generation of explorers. For example, yesterday's pioneers relied heavily on forts and outposts to offer refuge from the challenges they faced. The adventurers of the 21st century might need to rely on something similar—a space station.
Humanity has already launched and sustained quite a few space stations, though all have remained tightly in the grasp of Earth’s gravity. The Soviet Union had the Almaz and Salyut space station programs, and later the Mir space station. The United States’ first space station was Skylab, which remained in orbit from 1973 to 1979.
Fast forward to today, and we have the International Space Station (ISS) whizzing around the Earth at 27,600 km/h (about 17,100 mph). A crew of six multinational astronauts calls the ISS home on any given day. China launched their first space station, the Tiangong-1, in 2011, but they have not sent any new crew members to the station since 2013.
Many innovations have been tested out on the ISS, including inflatable habitation modules, robotic crew members, autonomous drones, and zero-gravity 3D printers, to name a few. These tools might become more commonplace on next-generation space stations. Russia, China, and the United States have all announced planning efforts for follow-ups to both the ISS and Tiangong-1. Cooperation between these three countries remains tentative at best, but all plans look to a mid-2020s timeframe for the launch of the new space stations. Meanwhile, the European Space Agency has been actively pursuing plans for a lunar colony as well.
There is no doubt that the next-generation space station will be more technologically advanced regardless of what path is taken. However, what can we expect from the space stations that will exist in 100 years? Or 200 years?
To answer that question, we might consider closing our history textbook and turning on some science fiction.
Babylon 5, Deep Space Nine, and the Death Star: arguably, these are three of the most well-known space stations of science fiction lore. There have also been many takes on the ring-like space station, such as Space Station V from Stanley Kubrick's masterpiece 2001: A Space Odyssey and the more recent Elysium. If you are looking for a few laughs, there is the Satellite of Love made famous by the comedy show Mystery Science Theater 3000. But for now, let’s focus on the big three.
So how do these celestial outposts compare?
Death Star at a Glance:
Size: 120km (diameter)
Personnel: Approximately 2.5 million
Mission: Military weapons system
The sheer size of the Death Star from Star Wars makes it both awe-inspiring and also highly improbable that we would ever attempt to actually build a version of it. Some folks have tried to estimate the costs of such an endeavor, and the figures range from a few hundred quadrillion dollars to a few dozen sextillion dollars. A sextillion is a one followed by 21 zeros. Sort of makes the US deficit seem rather insignificant in comparison.
From a technical perspective, building the super laser would probably be the greatest challenge. Generating and storing the amount of energy needed to destroy a planet in a single blast of deadly flaming green radiation would require a significant engineering effort. Perhaps our future space stations would benefit from less powerful but still potent directed energy weapons to destroy debris that posed a potential impact danger.
Another key feature of the Death Star is the tractor beam, much to the chagrin of intergalactic smugglers. Tractor beams allow for controlled and precise movement of inbound spacecraft carrying supplies and personnel to the Death Star. After all, nothing ruins your day like a runaway space shuttle colliding with your inhabited space station. Might we employ such technology on our future space stations?
Babylon 5 at a Glance:
Size: 8km (length)
Personnel: Approximately 250,000
Mission: Diplomatic hub
While the Death Star is an instrument of war, the Babylon 5 space station is a tool for peace. It also has a diminutive stature compared to the Death Star. The design of Babylon 5 was inspired by physicist Gerard K. O'Neill’s vision for space stations, a concept now known as the O’Neill cylinder. An enormous motor built using electromagnetic bearings rotates the central cylinder, which results in artificial gravity for the station's inhabitants. Artificial gravity might very well be necessary for future explorers so as not to be afflicted by the health issues that arise from prolonged exposure to zero gravity. NASA astronaut Scott Kelly recently returned from a year aboard the ISS and noted some subtle but still significant medical challenges resulting from the absence of Earth’s gravitational pull on his body.
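The artificial-gravity math for a rotating station is simple centripetal mechanics, so it is worth a quick sketch. Babylon 5's hull radius isn't given here, so the 450 m figure below is purely an assumption for illustration:

```python
import math

# Spin rate needed for 1 g of artificial gravity at the hull of a
# rotating cylinder: centripetal acceleration a = omega^2 * r, so
# omega = sqrt(a / r). The radius is an assumed, illustrative value.
G = 9.81           # m/s^2, target acceleration (Earth gravity)
RADIUS = 450.0     # meters (assumed hull radius, not from the article)

omega = math.sqrt(G / RADIUS)        # angular velocity in rad/s
rpm = omega * 60 / (2 * math.pi)     # revolutions per minute
print(f"spin: {omega:.3f} rad/s ~ {rpm:.2f} rpm")
```

At that assumed radius, a leisurely spin of about 1.4 rpm produces a full g at the hull. This is one reason O'Neill-style designs favor large radii: the bigger the cylinder, the slower the spin, and the less the inhabitants notice disorienting Coriolis effects.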
The design of Babylon 5 is surprisingly well thought-out from an engineering perspective. It is divided into six sectors, each with their own functionality. The Blue Sector contains facilities for command and control, administration, medical care, and docking bays. The Red Sector provides the housing accommodations and recreational facilities. Food production and environmental systems are controlled from the Green Sector. Mechanical systems are housed in the Grey Sector, while the station’s power plant is located in the Yellow Sector. Lastly, the Brown Sector contains facilities for transients as well as for manufacturing, maintenance, and waste handling. In short, it is a small, self-contained city. Future engineers might very well look to this type of segmentation in designing larger, more long-term space-based habitats.
Babylon 5 was built to serve as a neutral location for a variety of species to interact. Thus, the environmental control systems of the station must be dynamic enough to meet the needs of aliens with a variety of life-sustaining atmospheric needs. A monorail train runs the length of the rotational axis of the station by taking advantage of the fact that an O’Neill cylinder has zero gravity along that axis. Babylon 5 also features a double hull to give itself protection from any damage that could be caused by impacts from meteorites and other small debris. While we might not have a need for mass transit in our future space stations, protection from debris and efficient environmental control systems are absolute must-haves.
Deep Space Nine at a Glance:
Size: 1451.82m (diameter)
Personnel: 300 permanent inhabitants, accommodations to support up to 7,000
Mission: Mineral mining and refinery
The final space station, Deep Space Nine, is also the smallest of those examined. Built by the non-human species known as the Cardassians, Deep Space Nine’s original function was mineral and ore processing. Undoubtedly, DS9 needs a robust and extensive industrial control system to safely handle that role. The layout of the station is reasonable as well, with a central core surrounded by two concentric rings. The core houses the majority of the facilities, including operational, administrative, commercial, industrial, and mechanical spaces. The smaller inner ring provides housing for the crew and visitors, while the outer ring and its massive pylons provide the infrastructure needed for starships to dock with DS9. Again, smart layouts will be necessary for our future real-world space stations.
DS9 residents do not have to worry about a lack of resources thanks to their replicator technology. The capability of converting energy into matter in a controlled manner means the only thing standing between you and a hot meal or a shiny new screwdriver is a simple voice command. Imagine asking your Amazon Echo for an item, and having it 3D printed right in front of you in mere seconds! This type of innovation may very well prove to be important. Getting to space is all about weight. The more you take, the greater the cost of getting it all launched into orbit. Instead of taking spares for everything that goes with us, we could simply take raw materials and fabricate replacement parts on the fly thanks to futuristic additive manufacturing technologies.
Another unique facility aboard DS9 is the holosuite, a sort of immersive virtual reality experience without the necessity of donning cumbersome eyewear. The holosuite provides an entertainment venue for station personnel, which is crucial to their psychological well-being since they are otherwise trapped on a rather static cosmic island. Such amenities that take care of the human mind in addition to the human body will no doubt be critical for mission success in tomorrow’s real-world space stations. This will be especially true as space stations become bonafide colonies and not just orbiting science laboratories.
War. Peace. Commerce. The missions of these space stations may be diverse, but there are many similarities when viewed from a technical perspective. So just as soon as we sort out artificial gravity, tractor beams, directed energy weapons, advanced life support systems, immersive virtual reality, next-generation 3D printers, and a few other technologies, we should be all set for the long-term habitation of space. It will be interesting to see which, if any, of these concepts escape the realm of science fiction to become technological fact.
Copyright ©2024 Mouser Electronics, Inc.