What makes for a harsh environment? For a person, that’s an inhospitable set of conditions that can cause bodily harm over a period of time. Space is one of the harshest environments you can imagine for human beings – even without monstrous alien predators. Pressure, or the lack of it, temperature, or the lack of it, high-energy particles, and radiation are just not very human-friendly. It’s a cold, dark place that lacks many of the creature comforts of home – like water and air.
Space is also one of the most difficult environments in which to design electronics systems. Engineers must take harsh environmental conditions into account when creating new designs one day destined to travel among the stars.
Electrical circuits naturally don't do well around water, and they don't like temperature extremes, particulate ingress, electrostatic discharge, electromagnetic interference, vibration, or physical impact.
The specific environmental conditions in which the product will be used will affect the product specifications, and must be determined beforehand. You certainly don't want to be the one who gets called to go repair them. Or do you?
To find out more about how Mouser Electronics is Empowering Innovation, visit
https://www.mouser.com/empowering-innovation/ and explore a range of topics out of this world.
For millennia man has stared at the heavens, marveling at the countless celestial bodies that appear nightly in the sky, seemingly suspended in the vast darkness of night. Throughout history, we have pondered many questions regarding the universe's beginning and whether there is an end to the vastness of space. On December 25, 2021, NASA launched the James Webb Space Telescope (JWST) in hopes of getting that much closer to answers that will tell humanity the history of the universe.
The JWST is the largest optical telescope in space. Using the latest technology in its high-resolution, highly sensitive instrumentation, the JWST can observe the furthest reaches of space and uncover secrets hidden deep within the cosmos. The JWST is capable of viewing objects too distant and faint for its predecessor, the Hubble Space Telescope, to observe.
One key feature of the JWST is the ability to conduct infrared astronomy. Infrared astronomy focuses on the observation and analysis of astronomical objects by using infrared (IR) radiation. Conducting IR observations and analysis will enable investigations across many fields of astronomy and cosmology. Scientists and astrophysicists will be able to conduct observations of the first stars and the formation of the first galaxies. In addition, the JWST will allow observers to characterize the atmospheres of potentially habitable exoplanets.
It took an international team of very smart people working very long hours, implementing lessons learned from previous projects, and a vast amount of resources to build the JWST. The JWST integrates many technologies, which rely on a wide array of sensors, computer processing power, memory, software, robust microwave transmitters and receivers, precision robotics, exotic metals, exotic shielding, and high-performance interconnects. Out of all the onboard technology, however, temperature sensors and high-performance interconnects play critical roles in making the entire JWST mission work.
The JWST's onboard instrumentation, particularly its IR capability, requires very cold temperatures to function properly. These temperatures are below 50 kelvin (K) (-223°C; -370°F); on Earth they are typically achieved with cryopumps and diffusion pumps in semiconductor manufacturing, or with liquid helium and nitrogen in applications such as biomedicine. In space, however, where the baseline temperature is 2.7K, such temperatures can be achieved easily away from radiating celestial bodies such as the Sun or Earth.
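The kelvin, Celsius, and Fahrenheit figures above follow from two simple conversions; a quick sketch:

```python
def k_to_c(kelvin):
    """Kelvin to degrees Celsius."""
    return kelvin - 273.15

def c_to_f(celsius):
    """Degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The JWST's instruments operate below 50 K:
t_c = k_to_c(50)      # -223.15 degC, rounded to -223 degC above
t_f = c_to_f(t_c)     # -369.67 degF, rounded to -370 degF above
```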
To achieve the required operating temperature, the JWST was placed in a solar orbit near the L2 Lagrange point, about 1.5 million kilometers (930,000mi) from Earth. The spacecraft is protected from warming by the Sun, Earth, and Moon with a five-layer sunshield. This remote location ensures that the JWST can remain at the correct operating temperature, allowing it to achieve its maximum potential for groundbreaking astronomical research.
This week's New Tech Tuesday showcases two space-certified products from Innovative Sensor Technology and Amphenol Times Microwave Systems. The technology and precision behind both products are examples of what makes advancements in space exploration like that of the JWST possible.
Innovative Sensor Technology 600°C Series Platinum Sensors with Wires
Sensors used in space must fulfill several special characteristics that qualify them for space missions. Innovative Sensor Technology IST AG offers European Space Components Coordination (ESCC) qualified thin-film platinum (Pt) temperature sensors that can be used in a temperature range from -200°C to +200°C and are classified as tolerance class B (IEC 60751 F0.3). These RTDs are available with different resistances ranging from Pt100 to Pt2000 and are available as an Engineering Model (EM) or a Flight Model (FM). The sensors can also be configured with extended wires (with an ESCC-qualified wire). Compared to traditionally used wire-wound sensors, these thin-film sensors offer several advantages. Most importantly, they lack the exposed, vulnerable surface that can be damaged when a sensor is subjected to constant thermal cycling and occasional vibration. The thin-film technique, in which the platinum resistance structure is firmly bonded to the ceramic surface of the sensor, makes these sensors very robust and suitable for applications with frequent temperature fluctuations and vibrations.
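The IEC 60751 standard cited above defines how a platinum RTD's resistance varies with temperature (the Callendar-Van Dusen polynomial) and what class B (F0.3) tolerance means. A minimal sketch using the generic IEC 60751 coefficients; these are the standard values, not figures specific to IST's parts:

```python
# Standard IEC 60751 Callendar-Van Dusen coefficients for platinum RTDs.
A = 3.9083e-3    # 1/degC
B = -5.775e-7    # 1/degC^2
C = -4.183e-12   # 1/degC^4, used only below 0 degC

def pt_resistance(t_c, r0=100.0):
    """Resistance (ohms) of a platinum RTD at t_c (degC).
    r0 is the 0 degC resistance: 100 for Pt100, 2000 for Pt2000."""
    if t_c >= 0:
        return r0 * (1 + A * t_c + B * t_c**2)
    return r0 * (1 + A * t_c + B * t_c**2 + C * (t_c - 100) * t_c**3)

def class_b_tolerance(t_c):
    """IEC 60751 class B (F0.3) tolerance in degC: +/-(0.30 + 0.005*|T|)."""
    return 0.30 + 0.005 * abs(t_c)

# A Pt100 reads 138.51 ohms at 100 degC and 18.52 ohms at -200 degC,
# with a class B tolerance of +/-0.8 degC and +/-1.3 degC respectively.
```

Reading the sensor in practice means measuring its resistance (usually with a 3- or 4-wire circuit to cancel lead resistance) and inverting this curve.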
Amphenol Times Microwave Systems InstaBend™ 086 Flexible Microwave Assemblies
The new InstaBend™ 086 Flexible Microwave Assemblies from Amphenol Times Microwave Systems are flexible, coaxial, high-performance microwave assemblies that provide a preassembled design for interconnects between RF circuit cards, modules, and enclosure panels. InstaBend assemblies are ideal for in-the-box applications with space constraints, including space flight, thermal vacuum, microwave test, and many other commercial and military applications. The cable can be bent very closely behind the connector, minimizing the footprint, saving space, and simplifying cable routing without the need to protect the back of the connector.
The JWST represents humanity's pursuit of answers to many questions about our existence. Space exploration presents many challenges, including vast distances and extreme conditions, but with precision components like those from Innovative Sensor Technology and Amphenol Times Microwave Systems, future extreme machines like the JWST will continue to be our window into the universe as we explore and try to understand it.
Photo courtesy Star Trek Continues
Like many kids in my generation, I grew up with Star Trek. I’d watch the Original Series in re-runs and later followed the Next Generation in college. I found their take on space exploration fascinating and wondered if we would ever be able to explore the galaxy like Captain Kirk.
In reality, space is one of the harshest environments that we as humans can face. Since we’re used to our oxygen-rich, solar-powered, radiation-shielded, one-g Earth, in order to explore the cold vacuum of outer space, we need a tremendous amount of ingenuity and determination.
Let’s take a look at some of the problems we face with actual manned space travel and see how movies and TV have solved them compared to real life.
photo courtesy NASA
Space debris is a problem the ISS, shuttles, and satellites have had to deal with on more than one occasion. NASA has strategies and procedures in place to protect both astronaut crews and expensive unmanned missions from the fate that the International Space Station (ISS) and the Tiangong Space Station faced in the film Gravity. But what about traveling in space? Imagine traveling at high velocity through space, constantly worried that a fist-sized rock will punch through your hull, destroying your craft and endangering your crew. Of course, the Enterprise has a deflector array to avoid these deep space collisions. Until we learn to master gravitons, we need powerful sensors to look ahead and carefully plotted courses to avoid collisions.
What if something hits your craft and you’re stranded millions of miles from home? Standard radio transmissions from Mars could take up to 21 minutes to reach Earth, and another 21 minutes for the response. That’s a relatively long time to wait for a signal. In the Star Trek universe, they send their long-distance messages at faster than light speeds through Starfleet’s subspace communications array. But until we develop such a technology, we’re stuck with a long-delay conversation. And an even longer wait for help to arrive.
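The delay figure above comes straight from the speed of light and the Earth-Mars distance; a rough sketch (the distances are approximate orbital figures, not from the article):

```python
# One-way light-time delay between Earth and Mars.
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_delay_min(distance_km):
    """Minutes for a radio signal to cover distance_km at light speed."""
    return distance_km / C_KM_S / 60

closest  = one_way_delay_min(54.6e6)   # ~54.6 million km: about 3 minutes
farthest = one_way_delay_min(401e6)    # ~401 million km: about 22 minutes
```

Depending on where the two planets sit in their orbits, one-way delay ranges from roughly 3 to 22 minutes, which is why mission planners treat Mars conversations as message exchanges rather than calls.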
photo courtesy NASA
Another problem that weighs heavily on astronauts’ minds is gravity, or the lack thereof. Astronauts aboard the ISS need to be mindful of the dangers posed to their health in a zero-g environment. While living on Earth, the idea of gravity is as natural as breathing. Our muscles develop enough strength to keep us mobile. Without gravity, our unused muscles begin to atrophy. While floating around in space may be a fun experience for astronauts, without regular exercise, the return to normal gravity could prove very unpleasant physically. So how would travelers from Earth survive a five-year mission through space without the benefit of the artificial gravity present in the Enterprise? One solution is artificial gravity through rotation. Seen in Stanley Kubrick’s 2001: A Space Odyssey, and possibly experienced by you at a carnival, a spinning ring produces an effect similar to gravity: the rotating structure pushes its occupants toward the rim, and that inward (centripetal) push feels to them like an outward pull. It’s the same physics that keeps water in a bucket swung on a string, even when the bucket is upside-down.
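How fast would such a ring need to spin? The required centripetal acceleration is ω²r, so setting it equal to one g gives the spin rate. A minimal sketch; the 150 m radius is an assumption for illustration, roughly the scale of the station wheel in 2001:

```python
import math

G = 9.81  # m/s^2, target "artificial gravity"

def spin_for_gravity(radius_m, g=G):
    """Angular speed (rad/s), rotation period (s), and rim speed (m/s)
    needed to produce centripetal acceleration g at the given radius."""
    omega = math.sqrt(g / radius_m)        # from g = omega^2 * r
    period = 2 * math.pi / omega
    rim_speed = omega * radius_m
    return omega, period, rim_speed

# A 150 m-radius ring needs one rotation roughly every 25 seconds
# to simulate Earth gravity at the rim.
omega, period, rim = spin_for_gravity(150.0)
```

Note that smaller rings must spin faster, and spin rates above a few rpm tend to cause motion sickness, which is why serious designs (like O'Neill cylinders) are so large.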
Perhaps the greatest challenge of all is overcoming the speed limit of matter and information in the universe as set by Einstein’s theory of special relativity. The Enterprise gets around this rule by using the explosive reaction of matter and antimatter, focused through dilithium crystals, to power its warp core, which generates a “warp” field in spacetime, creating a subspace bubble around the ship that allows it to travel at faster-than-light speeds. But in our world, faster-than-light (FTL) travel poses not only incredible energy problems but a whole set of side effects as well. Moving at great speeds, or moving away from large gravitational masses, has a strange effect on time relative to other observers, as explored in the film Interstellar. Because of this, traveling at speeds approaching that of light would cause time to slow down for the traveler relative to those who are Earthbound. An hour could be years with enough velocity. While this is exciting news for time travel fans, it’s not so great for the friends and families of the travelers themselves.
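The "an hour could be years" effect is captured by the Lorentz factor γ = 1/√(1 − v²/c²): for every hour the traveler experiences, γ hours pass on Earth. A quick sketch:

```python
import math

def time_dilation(v_fraction_c):
    """Lorentz factor for a speed given as a fraction of c: how much
    time passes on Earth per unit of time experienced by the traveler."""
    return 1 / math.sqrt(1 - v_fraction_c**2)

# At 90% of light speed, about 2.3 Earth-hours pass per traveler-hour;
# at 99.9%, about 22.4 Earth-hours pass per traveler-hour.
gamma_90  = time_dilation(0.90)
gamma_999 = time_dilation(0.999)
```

The factor only becomes dramatic very close to c, which is why the effect is negligible for today's spacecraft but central to relativistic travel stories.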
These and many more challenges are standing in the way of our exploring the cosmos. The beautiful thing about Star Trek and so many other science fiction worlds is that they inspire us to overcome them. A whole generation of engineers, scientists and astrophysicists list Star Trek as the reason for their chosen fields, and with their help, it’s inevitable that humanity will soon see itself traveling through the cosmos, seeking out new life and new civilizations, boldly going where no one has gone before.
The use of very high frequencies for communication has always been “just around the corner,” where, because of significant technical challenges, it’s remained for 50 years. Now, faced with scarce spectrum available at lower frequencies, the wireless industry is determined to overcome those challenges.
The fifth generation of wireless communications, commonly called “5G,” will be a technology tour de force once it’s rolled out after 2020. One of its most impressive accomplishments will be the use of radio frequencies far, far higher than have ever been used for cellular networks or almost anything else, a region called millimeter wave. This is a very big deal for many reasons. But first, it’s important to understand how these frequencies fit into the spectral landscape, how they differ from their lower-frequency counterparts, and why, except for satellite communications, vehicle radar, and defense systems, they’ve not been used.
The electromagnetic spectrum is massive in its breadth, covering frequencies from 1Hz (the domain of lightning) to above 10 exahertz (10EHz, or 10 quintillion hertz, where gamma rays live). The region between about 1MHz and 300GHz is where communications can take place, of which only a tiny portion is actually used (1MHz to 30GHz). And of that, less than 20 percent (about 6GHz) is densely populated by wireless communications systems including shortwave, AM, FM, TV broadcasting, cellular, amateur radio, and land mobile radio, as well as assorted unlicensed applications like Wi-Fi, Bluetooth, ZigBee, microwave ovens, and industrial and medical systems. This is the “sweet spot” where communications over significant distances can be accomplished, ranging from spanning the world at the low end to a few miles at the high end (Figure 1).
Figure 1: The millimeter-wave portion of the spectrum, also called Extremely High Frequency (EHF) is generally defined as beginning at 30GHz and extending to 300GHz. (Source: NASA)
However, the swath of spectrum between about 150MHz and 1GHz is prime spectral real estate: it suffers less signal loss, penetrates buildings, glass, and other impediments more easily, and offers wide-area coverage. So it’s not surprising that this region accounts for almost all communications throughout the world, an increasing problem as there is precious little unused spectrum left. Thus, the FCC can auction remarkably narrow slices of this spectrum for billions of dollars.
So now the powers that determine the future of wireless communications have decided to venture collectively into the relatively unexplored spectral frontier as high as 60GHz and eventually even 100GHz, 10 to 100 times higher than ever before. On the face of it this is certainly logical, but as always the devil is in the details, and those details account for why millimeter-wave frequencies have never been widely used.
Frontiers are meant to be explored, and space—it is said—is the final frontier. In many ways, the exploration and long-term settlement of our solar system will be entirely different from the European colonization of the Americas or the later western expansion of the United States. The extreme environment of space, the vast distances, lack of indigenous resources, and the mere act of leaving Earth’s surface will present challenges unique to our “star-ward” journeys.
Despite these new complications, we can still look back to America’s westward expansion of the 1800s to find some ideas that might help our next generation of explorers. For example, yesterday's pioneers relied heavily on forts and outposts to offer refuge from the challenges they faced. The adventurers of the 21st century might need to rely on something similar—a space station.
Humanity has already launched and sustained quite a few space stations, though all have remained tightly in the grasp of Earth’s gravity. The Soviet Union had the Almaz and Salyut space station programs, and later the Mir space station. The United States’ first space station was Skylab, which remained in orbit from 1973 to 1979.
Fast forward to today, and we have the International Space Station (ISS) whizzing around the Earth at 27,600km/h (that comes out to about 17,100mph). A crew of six multinational astronauts calls the ISS home on any given day. China launched their first space station, the Tiangong-1, in 2011, but they have not sent any new crew members to the station since 2013.
Many innovations have been tested out on the ISS, including inflatable habitation modules, robotic crew members, autonomous drones, and zero-gravity 3D printers, to name a few. These tools might become more commonplace on next-generation space stations. Russia, China, and the United States have all announced planning efforts for follow-ups to both the ISS and Tiangong-1. Cooperation between these three countries remains tentative at best, but all plans look to a mid-2020s timeframe for the launch of the new space stations. Meanwhile, the European Space Agency has been actively pursuing plans for a lunar colony as well.
There is no doubt that the next-generation space station will be more technologically advanced regardless of what path is taken. However, what can we expect from the space stations that will exist in 100 years? Or 200 years?
To answer that question, we might consider closing our history textbook and turning on some science fiction.
Babylon 5, Deep Space Nine, and the Death Star: arguably three of the most well-known space stations of science fiction lore. There have also been many takes on the ring-like space station, such as Space Station 5 from Stanley Kubrick's masterpiece 2001: A Space Odyssey, and the more recent Elysium. If you are looking for a few laughs, there is the Satellite of Love made famous by the comedy show Mystery Science Theater 3000. But for now, let’s focus on the big three.
So how do these celestial outposts compare?
Death Star at a Glance:
Size: 120km (diameter)
Personnel: Approximately 2.5 million
Mission: Military weapons system
The sheer size of the Death Star from Star Wars makes it both awe-inspiring and highly improbable that we would ever attempt to actually build a version of it. Some folks have tried to estimate the costs of such an endeavor, and the figures range from a few hundred quadrillion dollars to a few dozen sextillion dollars. A sextillion is a one followed by 21 zeros. Sort of makes the US deficit seem rather insignificant in comparison.
From a technical perspective, building the super laser would probably be the greatest challenge. Generating and storing the amount of energy needed to destroy a planet in a single blast of deadly flaming green radiation would require a significant engineering effort. Perhaps our future space stations would benefit from less powerful but still potent directed energy weapons to destroy debris that posed a potential impact danger.
Another key feature of the Death Star is the tractor beam, much to the chagrin of intergalactic smugglers. Tractor beams allow for controlled and precise movement of inbound spacecraft carrying supplies and personnel to the Death Star. After all, nothing ruins your day like a runaway space shuttle colliding with your inhabited space station. Might we employ such technology on our future space stations?
Babylon 5 at a Glance:
Size: 8km (length)
Personnel: Approximately 250,000
Mission: Diplomatic hub
While the Death Star is an instrument of war, the Babylon 5 space station is a tool for peace. It also has a diminutive stature compared to the Death Star. The design of Babylon 5 was inspired by physicist Gerard K. O'Neill’s vision for space stations, a concept known now as O’Neill cylinders. An enormous motor built using electromagnetic bearings generates a rotation of the central cylinder, which results in artificial gravity for the station inhabitants. Artificial gravity might very well be necessary for future explorers so as not to be afflicted by health issues that arise from prolonged exposure to zero gravity. NASA astronaut Scott Kelly recently returned from a year aboard the ISS and noted some minor but still significant medical challenges resulting from the absence of Earth’s gravitational pull on his body.
The design of Babylon 5 is surprisingly well thought-out from an engineering perspective. It is divided into six sectors, each with its own function. The Blue Sector contains facilities for command and control, administration, medical care, and docking bays. The Red Sector provides the housing accommodations and recreational facilities. Food production and environmental systems are controlled from the Green Sector. Mechanical systems are housed in the Grey Sector, while the station’s power plant is located in the Yellow Sector. Lastly, the Brown Sector contains facilities for transients as well as for manufacturing, maintenance, and waste handling. In short, it is a small, self-contained city. Future engineers might very well look to this type of segmentation in designing larger, more long-term space-based habitats.
Babylon 5 was built to serve as a neutral location for a variety of species to interact. Thus, the environmental control systems of the station must be dynamic enough to meet the needs of aliens with a variety of life-sustaining atmospheric needs. A monorail train runs the length of the rotational axis of the station by taking advantage of the fact that an O’Neill cylinder has zero gravity along that axis. Babylon 5 also features a double hull to give itself protection from any damage that could be caused by impacts from meteorites and other small debris. While we might not have a need for mass transit in our future space stations, protection from debris and efficient environmental control systems are absolute must-haves.
Deep Space Nine at a Glance:
Size: 1451.82m (diameter)
Personnel: 300 permanent inhabitants, accommodations to support up to 7,000
Mission: Mineral mining and refinery
The final space station, Deep Space Nine, is also the smallest of those examined. Built by the non-human species known as the Cardassians, Deep Space Nine’s original function was mineral and ore processing. Undoubtedly, DS9 needs a robust and extensive industrial control system to safely handle that role. The layout of the station is reasonable as well, with a central core surrounded by two concentric rings. The core houses the majority of the facilities, including operational, administrative, commercial, industrial, and mechanical spaces. The smaller inner ring provides housing for the crew and visitors, while the outer ring and its massive pylons provide the infrastructure needed for starships to dock with DS9. Again, smart layouts will be necessary for our future real-world space stations.
DS9 residents do not have to worry about a lack of resources thanks to their replicator technology. The capability of converting energy into matter in a controlled manner means the only thing standing between you and a hot meal or a shiny new screwdriver is a simple voice command. Imagine asking your Amazon Echo for an item, and having it 3D printed right in front of you in mere seconds! This type of innovation may very well prove to be important. Getting to space is all about weight. The more you take, the greater the cost of getting it all launched into orbit. Instead of taking spares for everything that goes with us, we could simply take raw materials and fabricate replacement parts on the fly thanks to futuristic additive manufacturing technologies.
Another unique facility aboard DS9 is the holosuite, a sort of immersive virtual reality experience without the necessity of donning cumbersome eyewear. The holosuite provides an entertainment venue for station personnel, which is crucial to their psychological well-being since they are otherwise trapped on a rather static cosmic island. Such amenities that take care of the human mind in addition to the human body will no doubt be critical for mission success in tomorrow’s real-world space stations. This will be especially true as space stations become bonafide colonies and not just orbiting science laboratories.
War. Peace. Commerce. The mission of these space stations may be diverse, but there are many similarities when viewed from a technical perspective. So just as soon as we sort out artificial gravity, tractor beams, directed energy weapons, advanced life control systems, immersive virtual reality, next-generation 3D printers, and a few other technologies, we should be all set for the long-term habitation of space. It will be curious to see which, if any, of these concepts escape the realm of science fiction to become technological fact.
It’s not exactly cheap as chips to build a quantum computer today. Indeed, they’re currently pretty pricey machines that are only being built and operated by a few key players. As quantum computing gains buzz and attention, however, and as smaller companies start getting involved, some believe quantum computing might eventually see the same commoditization as classical computers before them. Investors in the space, however, disagree.
Quantum computers are popular with the tech press owing to their potential to solve problems of incredible complexity across a plethora of industries ranging from pharmaceuticals to finance, climate change to cancer, chemical compounds to cybersecurity.
While classical computers use bits (ones and zeros), quantum computers use qubits, which work on the principle of superposition: a qubit can be a zero, a one, or a weighted blend of both at the same time, giving rise to far more potential answers because multiple calculations effectively happen simultaneously across multiple inputs.
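The reason the state space explodes is that an n-qubit register is described by 2ⁿ complex amplitudes at once, where a classical n-bit register holds just one of 2ⁿ values at a time. A toy sketch of an equal superposition (the kind produced by applying a Hadamard gate to each qubit):

```python
import math

def equal_superposition(n_qubits):
    """State vector of n qubits in an equal superposition: 2**n
    amplitudes, so every bitstring is measured with equal probability."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)   # amplitudes squared must sum to 1
    return [amp] * dim

# 3 qubits -> 8 amplitudes; 50 qubits would need 2**50 (~10**15)
# amplitudes, which is why classical simulation hits a wall.
state = equal_superposition(3)
```

This also illustrates why the ~50-qubit machines mentioned below sit at the edge of what classical computers can simulate.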
The promise of quantum computing is that once the right number of qubits has been achieved, cracking these answers could come down to a few mere hours of compute time—a clearly tantalizing proposition. The problem, though, is that qubits are still fairly unstable, and even the most advanced companies in the quantum computing space are still hovering around the 50-qubit mark, whereas experts believe it will take hundreds, if not thousands, of qubits to really bring quantum computing to useful fruition. McKinsey, in its “A Game Plan for Quantum Computing” report, notes “many companies and businesses won’t be able to reap significant value from quantum computing for a decade or more, although a few will see gains in the next five years.”
“Since the technology is nascent, progress may be slow: our estimate is that by 2030 only 2,000 to 5,000 quantum computers will be operational. Since there are many pieces to the quantum computing puzzle, the hardware and software needed to handle the most complex problems may not exist until 2035 or beyond,” continues the report.
This hasn’t dissuaded investors or startups one, er, qubit.
“Quantum computing has the potential to change the world in ways no other computing technology has. Given its potential, I’d like to see it in as many hands as possible,” said William Hurley, co-founder of Quantum Computing startup Strangeworks. Hurley is also of the opinion that Quantum Computing will see commoditization eventually, just like classical hardware and data centers before them.
“When any revolutionary new technology is introduced it carries a premium price. As adoption spreads and more competitors enter the market prices generally fall. This is a cycle we've seen for decades. I don’t think quantum computing hardware can break this trend. These machines will start off being very costly, and end up dropping to thousands of dollars an hour, then hundreds, and then finally being sold at prices similar to those of GPUs or TPUs today.”
Indeed, it could be argued that the industry is already seeing some form of commoditization take root as ever more hardware vendors enter the market, while major players like Amazon Web Services (and soon Microsoft) fractionalize machine time and drive the price of access to some machines to under $1 per task.
Tomer Diari of Bessemer Venture Partners disagrees. “Commoditization of Quantum Computing hardware? Honestly, I do not see that happening. I think that we are going to see the exact opposite. We are going to see what is currently a very wide race among a couple dozen vendors trying to build a Quantum processor narrowing down and consolidating into the hands of perhaps three, four, five or six credible vendors working on this challenge.”
Diari believes that, at the end of the day, these few companies alone will control the technology, and the hardware stack, and will therefore be able to dictate the terms for at least the next decade or so.
Jay Gambetta of IBM agrees with Diari. “I do not see it getting commoditized in the near foreseeable future because of the dependence on the hardware,” he explained, adding that it would likely never be comparable to a laptop or even a data center, and that building software for Quantum Computing was no trivial task and would require many layers of abstraction. “These are going to be specially-built machines for solving certain problems, and there’s not a lot of room for commodifying that. You’re not going to be checking your email on it.”
Investors seem to be in agreement and have been pouring large funding rounds into some of the more promising quantum computing startups, as well as increasing their investments in larger, more established players in the industry.
That said, investing in quantum computing is far from a no-brainer. “It is really difficult to push an investment opportunity in a company building technology that you cannot even grasp, and quantum computing is a technology that is incredibly difficult to get your mind around. It’s also very difficult to tell what sort of modality is going to prevail. What is the right way forward? How do you even benchmark companies against each other when they are all currently developing very different technology stacks?” mused Diari. “It is almost like it must have been for semiconductor investors before semiconductors were a thing.”
While it may be tempting to invest in the software part of the stack, Diari said this approach had some problems because at the end of the day, only a handful of vendors would have the resources and ability to build the hardware, and it would be only those companies that would end up determining the entire stack, thus making investing in software layers prematurely a risky endeavor. “At the end of the day, those few companies will control the technology, they will control the stack and they are going to dictate the terms of this industry for at least the next decade or so.”
Hurley agrees that investors should be taking a long-term view on quantum computing investment. “Time is absolutely the most important thing. Most venture funds run for 10 years. If you’re at the very beginning of that quantum may not be the right investment for you. This isn’t a sprint, and it’s not a marathon, it’s an ultra-marathon and investors should be taking a long term view on any investments they make in the space,” he said.
Meanwhile, as several quantum computing players punt their offerings via the cloud, adoption and demand is set to increase, with many projecting that, at least for now, those wanting to experiment in the quantum sandbox may opt for a hybrid approach, coupling classical computing with a kick from quantum. Will that lead to commoditization down the line? Maybe or maybe not. But as Hurley points out, “commoditization would be good for consumers.”
(Source: Elena - stock.adobe.com)
We are in the midst of a new manufacturing renaissance.
The earliest days of the Fourth Industrial Revolution—or Industry 4.0—introduced us to the potential of automation. Through devices such as programmable logic controllers, businesses were able to considerably streamline their manufacturing processes. Yet these systems were largely islands of automation—disconnected individual stations unable to effectively communicate with one another, substantially limiting their effectiveness.
The proliferation of the Industrial Internet of Things (IIoT) changed this. Data flows seamlessly across a network of interconnected stations, sensors, and platforms. Through monitoring and analytics, manufacturers gain deeper insight into their systems and processes than ever before by identifying bottlenecks, addressing system issues, and uncovering opportunities for optimization.
Yet even this hyperconnectivity is simply another phase in the evolution of Industry 4.0. As technology continues to advance, manufacturing finds itself aligning with the aerospace, automotive, MedTech, and even consumer electronics sectors in a shift towards miniaturization: the drive to make things smaller and, ultimately, more powerful.
That’s something most people tend to overlook when the topic of miniaturization comes up. It’s not simply about making things smaller for the sake of it. It’s about making things smaller so we can get more out of them.
The potential applications of such an initiative in manufacturing alone are nearly infinite.
As with the transition towards automated manufacturing, the mini-movement for connectors has been happening for some time. For years, it has been a matter of efficiency, a means by which factory designers can do more with less. Yet as we move inexorably towards even further miniaturization, it’s rapidly becoming a matter of necessity. Therein lies the problem.
Devices resident on the factory floor must transmit and receive vast amounts of data in harsh industrial environments. They must be capable of surviving extreme heat, intense cold, immense pressure, jarring vibration, and crushing impact. Regardless of environmental conditions, connectors must maintain their electromechanical integrity even after multiple mating cycles.
More importantly, they must achieve this as inexpensively as possible.
That poses a challenge where ruggedization is concerned. Hardening a system or sensor to withstand extreme conditions traditionally means adding not-inconsiderable bulk to that system. Yet as we’ve already established, miniaturization in modern manufacturing is a necessity. Factory automation designers must constantly miniaturize to improve efficiency and do more with less.
Here’s where TE Connectivity (TE) comes in. For more than 75 years, TE has been a global industrial technology leader in creating a safer, more sustainable, more productive, and more connected future (which are hallmarks of Industry 4.0). TE’s connectivity and sensing products have a proven track record of rugged performance in the harshest environments while meeting the industry’s drive for smaller and smaller components.
I recently sat down with TE Connectivity’s Sameer Trikha, Sr. Mgr. Product Management, Industrial, and Jana New, Americas Product Manager, Industrial, to discuss how TE Connectivity addresses the unprecedented challenges of Industry 4.0. The following sections outline TE’s approach to Industry 4.0 and their featured miniaturized connectors designed to support this phase of the industrial revolution.
TE launched a new series of connector solutions designed specifically to address the unprecedented challenges of miniaturized manufacturing. From the start, we knew we needed to provide the same capabilities and patented technologies that made our original connectors reliable, without sacrificing scalability, durability, or functionality.
This recognition culminated in the release of our Micro-MaTcH line, which eliminates the need for costly gold-plated connection points and offers pitches as low as 1mm. We’ve already rolled out many solutions in this new miniaturized niche, aiming to provide robust, cost-effective technology for as many sectors and core applications as possible.
With our September 2021 acquisition of ERNI Group AG, we also recently expanded into several new markets. As a leading manufacturer of high-speed, fine-pitch connectors, ERNI’s addition to our portfolio not only closes several gaps within it, but considerably strengthens our own industrial automation capabilities. Leveraging ERNI’s technology and expertise, we can expand into new sectors including aerospace, automotive, industrial, and medical.
As mentioned, we’ve released several new connector solutions developed with miniaturization in mind. The easiest way to drill down to what makes them unique and establish our specific role in Industry 4.0 is to take a quick look at each. There are three in total.
The first two, AMPMODU Small Centerline and AMPMODU 50/50, will see their most frequent use in PLCs, industrial automation, data centers, and IIoT. The third, Micro-MaTcH, is intended for cost-constrained situations and use cases.
Supporting up to 100 positions at 1.0 amp each, the AMPMODU Small Centerline family features a 1.0mm centerline and takes up approximately 85 percent less space than traditional connectors. Its robust design includes non-protrusive hold-downs, which provide both retention during soldering and long-term strain relief over high mating cycles.
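The space figure is easy to sanity-check with a little arithmetic. Assuming the baseline is a traditional 2.54mm (0.1-inch) header pitch (an assumption for illustration; actual savings also depend on housing and contact geometry), shrinking the pitch to 1.0mm in both dimensions yields roughly the quoted savings:

```python
# Rough sanity check of the "~85 percent less space" claim, assuming the
# footprint scales with pitch in both dimensions (a simplification: real
# savings also depend on housing and contact geometry).

traditional_pitch_mm = 2.54   # classic 0.1-inch header pitch (assumed baseline)
small_centerline_mm = 1.0     # AMPMODU Small Centerline pitch

# Area scales with the square of the linear pitch ratio
area_ratio = (small_centerline_mm / traditional_pitch_mm) ** 2
savings_pct = (1 - area_ratio) * 100

print(f"Footprint is {area_ratio:.1%} of the traditional size")
print(f"Space savings: {savings_pct:.1f}%")   # ~84.5 percent
```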
Used in board-to-board and cable-to-board applications, connectors in the AMPMODU 50/50 family feature 1.27mm pitches, 30µ gold plating, and polarization to protect against incorrect mating. Expected to expand considerably in the future, the family includes a series of surface-mounted, double-row, vertically mounted units featuring dual-beam contacts—not typically seen in connectors of this type. They can carry up to 3.0 amps and operate over an exceptionally wide -65ºC to 105ºC temperature range.
Designed with cost-constrained applications in mind, this group of patent-protected connectors does not feature gold coating. Ordinarily, this would open the door to fretting brought on by micro-movements from vibration, thermal expansion, or thermal contraction. Fretting often results in poor electrical connection, representing a leading cause of system failure.
TE prevents fretting via a positioning spring within the female contact. This unique mechanism creates a gas-tight connection between connector and receptacle, halting the aforementioned micro-movements. No other company has yet been able to replicate this patented innovation as effectively as TE.
The Fourth Industrial Revolution is here, and miniaturization represents the next major evolution of Industry 4.0. TE intends to play a major role in seeing that evolution through. With an ever-growing, robust portfolio of high-quality connector solutions—such as the AMPMODU Small Centerline with its 30µ gold-plated contact option, the AMPMODU 50/50 series suited to high-density requirements, and the newly developed, unique, gold-free Micro-MaTcH family of connectors and cables—and an approach to miniaturization that doesn’t sacrifice the quality of TE’s original designs, TE is well positioned to be a one-stop shop for all your engineering interconnect needs.
Nick St. John received his BE in Electrical Engineering at Stony Brook University in 2019. Currently, Nick is working as an ASIC Design Engineer at Brookhaven National Laboratory. He is also pursuing a PhD in Electrical Engineering at Stony Brook University where he is specializing in analog integrated circuit design.
Copyright ©2024 Mouser Electronics, Inc.