What Robots Actually Are
The Word Nobody Agrees On
Ask ten roboticists what a robot is and you will get twelve answers.
A six-axis arm bolting doors onto Teslas at the Fremont plant? Obviously a robot. A Roomba bumping around your living room? Sure. A surgical system that lets a doctor in New York operate on a patient in Zurich? Most people would say yes. An algorithm that writes marketing copy? Almost nobody in the field would call that a robot, but the press does it anyway.
The confusion is not just semantic. It costs real money. Investors lump together companies building physical machines with companies building software chatbots under the same “robotics and AI” umbrella, then wonder why the hardware company burns cash ten times faster and takes five times longer to reach scale. Operators adopt a “robotic process automation” tool expecting the capabilities of an actual robot, then discover they have purchased a macro with a marketing budget.
The word “robot” entered English through Czech playwright Karel Čapek’s R.U.R. (Rossum’s Universal Robots)1, a 1920 play about artificial workers who overthrow their creators. The Czech word robota means forced labor. From the start, the concept carried an assumption that has never quite gone away: a robot is a thing that replaces a human worker. That assumption shapes public debate, regulatory frameworks, and a surprising number of pitch decks. It is also incomplete to the point of being misleading.
Here is a more useful starting point. A robot is a physical machine that senses its environment, processes that information to make decisions or follow instructions, and takes physical action in the world. Three words: sense, think, act2. If a system does all three in the physical world, it is a robot. If it is missing any one of the three, it is something else (a sensor, a computer, or a power tool, respectively, and all perfectly fine things to be).
This is the framework we will use throughout this Core. It is not the only possible definition, but it is the one that best serves people who need to evaluate robotics companies, deploy robotic systems, or understand where the industry is heading.
Sense, Think, Act
Every robot, from a $500 hobbyist arm to a $2 million surgical system, runs on the same basic loop. It gathers data about itself and its surroundings. It processes that data to decide what to do next. Then it does something physical: it moves, grips, welds, cuts, flies, drives, or simply holds still with precision.
The loop repeats. Hundreds of times per second in a fast application, a few times per minute in a slow one. Each cycle, the robot updates its understanding of the world and adjusts its actions accordingly.
Consider a warehouse robot picking items off a shelf. The “sense” phase involves cameras identifying the target object, a depth sensor measuring the distance to the shelf, and force sensors in the gripper confirming whether something is actually in hand. The “think” phase involves software deciding which object to pick first, planning a collision-free path for the arm, and calculating how much grip force to apply. The “act” phase involves motors driving the arm along that path and the gripper closing with the right pressure. The accompanying figure traces this cycle through a single pick: sensors scan the bin, software selects a target and plots a path, motors execute the grasp — then the loop resets for the next item.
Operational Concept — The Sense-Think-Act Cycle
Type: illustration
Shows: The three-phase sense-think-act loop visualized around a warehouse picking robot — camera/depth sensor fields (sense), path planning and grip calculation overlays (think), and motor-driven arm motion with gripper engagement (act) — connected by continuous cycle arrows.
Data: Warehouse bin-picking application with specific subsystem callouts per phase
Why: The reader sees that sense-think-act is not three sequential steps but a continuous loop, and sees what each phase physically looks like in a real deployment.
Figure ID: fig-01-01
Priority: A
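The cycle described above can be sketched in a few lines of code. This is a minimal illustration of the sense-think-act loop applied to bin picking, not any vendor's API: the Camera, Gripper, and PickRobot classes and all of their methods are invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in subsystems; a real robot would wrap vendor drivers.

@dataclass
class Camera:
    bin_items: list                        # objects currently visible in the bin
    def capture(self):
        return list(self.bin_items)        # SENSE: what is in front of me?

@dataclass
class Gripper:
    held: object = None
    def grasp(self, item):
        self.held = item
    def release(self):
        item, self.held = self.held, None
        return item

@dataclass
class PickRobot:
    camera: Camera
    gripper: Gripper
    picked: list = field(default_factory=list)

    def step(self):
        """One pass through the sense-think-act loop."""
        # SENSE: gather data about the world and about the robot itself
        scene = self.camera.capture()
        holding = self.gripper.held is not None

        # THINK: decide what to do with that data
        if holding:
            action = ("dropoff",)
        elif scene:
            action = ("pick", scene[0])    # trivial target selection
        else:
            action = ("idle",)

        # ACT: execute the decision through the actuators
        if action[0] == "pick":
            self.camera.bin_items.remove(action[1])
            self.gripper.grasp(action[1])
        elif action[0] == "dropoff":
            self.picked.append(self.gripper.release())

robot = PickRobot(Camera(["widget", "gadget"]), Gripper())
for _ in range(4):           # the loop repeats: pick, drop, pick, drop
    robot.step()
print(robot.picked)          # -> ['widget', 'gadget']
```

A real system runs this loop at a fixed rate (often hundreds of hertz) and every phase can fail: the camera can mis-segment the scene, the planner can time out, the gripper can drop the item, which is why each cycle re-senses rather than trusting the last one.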
What makes the loop interesting, and what makes robotics genuinely hard, is that each phase constrains the others. Bad sensors produce bad data, which means the “think” layer works from a distorted picture of reality. Slow computation means the robot’s decisions lag behind a changing environment. Imprecise actuators mean that even perfect perception and perfect planning produce imperfect results. The weakest link in the chain determines the system’s overall capability, and identifying that weakest link is one of the most valuable skills in robotics evaluation.
We will go deep on each phase in the modules that follow. Module 2 covers the “act” layer (mechanical systems and actuators). Module 3 covers the “sense” layer (sensors and perception). Module 4 covers the “think” layer (control systems and software). For now, the point is that these three functions are inseparable. A robot that can see perfectly but can’t move precisely is useless for surgery. A robot with extraordinary dexterity but no perception is useless outside a rigidly controlled environment.
Robotics Is Not AI (But It Needs AI)
This distinction matters more than almost anything else in the space, and it is the one most commonly botched in popular coverage and investor presentations alike.
Artificial intelligence is software that performs cognitive tasks: recognizing faces, translating languages, predicting equipment failures, generating text. It runs on servers, laptops, and phones. It does not, by itself, interact with the physical world. A large language model has no arms. A computer vision system has no legs. An AI that beats the world champion at Go has never picked up a physical game piece.
Robotics is the engineering of physical machines that act in the world. For most of the field’s history (roughly 1960 through 2015), robots operated with minimal intelligence. An industrial arm on a car assembly line followed the same programmed path thousands of times per day. It did not “think” in any meaningful sense. It executed instructions, and those instructions were painstakingly written by a human programmer who specified every millimeter of movement.
The two fields are converging fast. Modern robots increasingly rely on AI for perception (using machine learning to identify objects), for planning (using reinforcement learning to figure out how to grasp novel items), and for adaptation (using foundation models trained on millions of examples to handle situations they have never explicitly seen before). Boston Dynamics’ Atlas humanoid, Hyundai’s showcase for physical AI, uses learned behaviors to navigate terrain that would have required years of hand-coded rules a decade ago3.
The overlap is growing. A new category of company, sometimes called “physical AI” or “embodied intelligence,” is trying to build general-purpose AI that controls robots across many tasks. Figure AI, which raised $675 million in its Series B in early 2024 and followed with a $1 billion+ Series C at a $39 billion valuation in September 20254, is betting that a single AI system can make a humanoid robot useful in factories, warehouses, and eventually homes. Whether that bet pays off depends on solving problems in both AI and robotics simultaneously, which is exactly why it is so expensive and so uncertain.
For the purposes of this Core, we treat AI as one of several enabling technologies that make robots more capable. It is not the whole story. A robot with brilliant AI and a flimsy gripper still drops things.
The Five Subsystems
Every robot, regardless of form factor or application, contains some version of five core subsystems. Knowing what they are gives you a framework for evaluating any robotics product or company:
1. Sensors collect data about the robot and its environment. Cameras, LIDAR (Light Detection and Ranging) units, force sensors, encoders that track joint positions, temperature probes, microphones. Some are pointed outward (what is happening around me?) and some are pointed inward (what are my own joints doing?). We will cover these in depth in Module 3.
2. Computation and control processes sensor data, runs algorithms, and sends commands to the actuators. This can range from a simple microcontroller executing pre-programmed routines to a full onboard computer running real-time perception, planning, and learning systems. Module 4 covers this layer.
3. Actuators produce movement. Electric motors (the most common in modern robots), hydraulic cylinders (for applications requiring enormous force), or pneumatic systems (for lighter, faster motion). The actuator determines how strong, fast, and precise the robot’s movements can be. Module 2 goes deep here.
4. End effectors are the tools at the business end of the robot: the gripper that picks up a box, the welding torch that joins metal, the suction cup that lifts a silicon wafer, the scalpel that makes an incision. The end effector determines what tasks the robot can actually perform. A robot arm without an end effector is an expensive paperweight.
5. Power keeps everything running. Batteries for mobile robots, wall power for fixed industrial arms, sometimes hydraulic power units for heavy machinery. Power constrains where the robot can operate, how long it can work, and how much it can carry (because the battery itself has weight). Module 6 covers power systems.
The figure maps all five onto a single cobot — the Universal Robots UR10e — showing where each subsystem physically resides and why a failure in any one constrains the whole machine.
Hardware Diagram — The Five Subsystems of a Robot
Type: diagram
Shows: A Universal Robots UR10e with callout lines identifying all five subsystems at their physical locations — sensors (wrist force-torque sensor, joint encoders), computation (base control box), actuators (servo motors inside joints), end effector (tool flange with gripper), and power (AC cable at base).
Data: UR10e technical specifications; five-subsystem framework from this module
Why: The reader sees where each subsystem physically lives on a real robot, transforming an abstract numbered list into a spatial map they can reference throughout the Core.
Figure ID: fig-01-02
Priority: A
These five subsystems interact in ways that make the whole system harder to build than any individual component. That interaction is the subject of Module 7 (The Integration Challenge), and it is the reason robotics companies burn through cash in ways that pure software companies do not.
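The five-subsystem framework can also be expressed as a checklist, which is roughly how an evaluator uses it. The sketch below is illustrative only; the RobotSpec class and the component names are invented, with values loosely modeled on a cobot like the UR10e described in the figure.

```python
from dataclasses import dataclass

@dataclass
class RobotSpec:
    """The five core subsystems every robot contains, per this module."""
    sensors: list        # cameras, encoders, force sensors...
    computation: str     # controller running perception/planning/control
    actuators: list      # electric motors, hydraulics, pneumatics
    end_effector: str    # gripper, welding torch, suction cup, scalpel
    power: str           # battery, wall power, hydraulic power unit

    def missing_subsystems(self):
        """Return any subsystem that is absent entirely. A machine
        missing one of the five is not a robot: it is a sensor, a
        computer, or a power tool."""
        return [name for name, value in vars(self).items() if not value]

cobot = RobotSpec(
    sensors=["joint encoders", "wrist force-torque sensor"],
    computation="base control box",
    actuators=["servo motor in each joint"],
    end_effector="two-finger gripper on the tool flange",
    power="AC wall power",
)
print(cobot.missing_subsystems())   # -> [] : all five present
```

The check is deliberately binary. The harder evaluation question, which subsystem is the weakest rather than which is missing, is exactly the integration problem Module 7 takes up.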
Why Atoms Are Harder Than Bits
There is a reason that software ate the world before robots did. Software scales at near-zero marginal cost. You write the code once, copy it a billion times, and distribute it over the internet in seconds. If the code has a bug, you push a patch. If the market shifts, you pivot the product with a few sprints of engineering work. The feedback loop from idea to deployed product can be measured in weeks.
Robots live in the physical world, and the physical world is unforgiving.
Gravity pulls things down. Friction wears surfaces. Impacts break components. Temperature warps tolerances. Dust clogs sensors. Water corrodes electronics. Every one of these problems must be solved not in simulation but in the actual environment where the robot operates: a factory floor, a farm field, a hospital operating room, the bottom of the ocean.
Manufacturing a robot means sourcing hundreds of components from dozens of suppliers, machining parts to tight tolerances, assembling them in a clean environment, and testing each unit individually. A software update ships instantly to every user; a hardware revision requires retooling a production line, which takes months and costs millions.
The split-panel figure illustrates why: a pristine lab eliminates the variables that a warehouse, farm, or factory floor restores with a vengeance.
Operational Concept — Lab vs. Field: The Last Inch
Type: illustration
Shows: Split-panel comparison — left panel shows a cobot arm picking a single object in a clean, brightly lit lab ("95% success rate" overlay); right panel shows the same arm in a cluttered real-world deployment with variable lighting, unexpected objects, a nearby human, and ambient disruption ("60% success rate" overlay). Bold "THE LAST INCH" label between panels.
Data: Field-vs-lab performance gap cited in the module (95% → 60%)
Why: The reader sees viscerally why the gap exists — the real world is messy in ways a lab is not — and will remember this image every time a founder says "it works."
Figure ID: fig-01-05
Priority: C
This is not an argument against robotics. It is an argument for understanding what you are getting into. The companies that succeed in robotics are the ones that respect the physics, budget for the iteration cycles, and build organizations that can execute across software, hardware, and field operations simultaneously. That is a rare combination, and it is why the robotics industry, despite decades of progress, still has an installed base that is small relative to the global workforce.
The International Federation of Robotics reported that the global market for industrial robot installations hit a record $16.7 billion in 20256. Amazon operates more than 750,000 mobile robots across its fulfillment network7. These are real numbers. But they represent a fraction of what is possible, and understanding why the gap exists between potential and deployment is one of the central questions of this entire Core.
A Working Taxonomy
Roboticists categorize robots in dozens of ways: by application, by form factor, by industry, by size, by payload. Most of these taxonomies are useful for specialists and confusing for everyone else.
For the purposes of this Core, we use two axes that matter most for evaluation and investment decisions.
Axis 1: Autonomy level. How much human involvement does the robot require during operation?
At one end: teleoperation, where a human controls every movement (think bomb disposal robots or underwater ROVs). The robot contributes its physical capabilities; the human contributes all the intelligence. At the other end: full autonomy, where the robot perceives, decides, and acts without human input for extended periods (a Mars rover, for instance, because the communication delay makes teleoperation impractical). In between lies a spectrum that includes pre-programmed automation (the industrial arm that repeats a fixed routine), supervised autonomy (the self-driving car with a safety driver), and collaborative operation (cobots working alongside humans on assembly lines).
Axis 2: Mobility. Is the robot fixed in place or does it move through the environment?
Fixed robots (industrial arms, surgical systems, benchtop assembly stations) operate in a defined workspace. Their world is constrained and largely predictable. Mobile robots (warehouse AGVs, delivery drones, self-driving trucks, humanoids) must navigate unstructured environments where things change constantly.
The combination of these two axes produces the interesting categories. A fixed, pre-programmed industrial arm is the workhorse of automotive manufacturing: low autonomy, no mobility, but extremely productive in its narrow domain. A mobile, highly autonomous humanoid is the moonshot bet that Figure AI, Tesla, and Agility Robotics are chasing: maximum autonomy, full mobility, but not yet proven at commercial scale. Between those extremes lies an enormous range of commercially viable products. The positioning map plots these categories on the two axes that matter most, revealing the industry’s trajectory: every generation pushes toward the upper right — more autonomous, more mobile, and exponentially harder to engineer.
Here is where the money is, as of early 2026:
| Category | Autonomy | Mobility | Example | Market Maturity |
|---|---|---|---|---|
| Industrial arms | Pre-programmed | Fixed | FANUC, ABB, KUKA | Mature, $16.7B/yr |
| Cobots | Supervised | Fixed | Universal Robots, FANUC CRX | Growing fast, ~10% of industrial installs8 |
| Warehouse AMRs | High | Mobile (wheels) | Amazon Robotics, Locus, 6 River | Scaling, 750K+ units at Amazon alone |
| Surgical systems | Teleoperated | Fixed | Intuitive (da Vinci), Medtronic Hugo | Established, high-margin |
| Delivery drones | High | Mobile (air) | Zipline, Wing | Niche but expanding |
| Humanoids | Variable | Mobile (legs) | Boston Dynamics Atlas, Figure, Agility Digit | Pre-commercial |
This table is a snapshot. The categories are shifting. Industrial arms are gaining sensors and software that push them toward higher autonomy. Cobots are gaining mobility. Warehouse robots are gaining manipulation capabilities (arms mounted on mobile bases). The trend is toward more autonomy and more mobility simultaneously, which is why the integration challenge (Module 7) is the central engineering problem of the field.
The scene in the accompanying figure captures the division of labor at scale: hundreds of mobile robots ferry shelving pods across the floor, but at the picking station, a human hand still does the work that perception and manipulation systems cannot yet match.
Narrative Scene — Amazon Fulfillment Center with Kiva Robots
Type: illustration
Shows: An Amazon fulfillment center floor viewed from a slightly elevated angle — multiple Kiva-derived mobile robots carrying tall shelving pods toward a human picking station, where a worker reaches into a delivered pod. Background robots at smaller scale convey the 750,000-unit fleet size.
Data: Amazon Robotics fleet data; Kiva-derived mobile robot form factor; fulfillment center layout
Why: The reader sees what the largest robotics deployment in history actually looks like on the ground — and sees the critical division of labor: robots carry, humans pick.
Figure ID: fig-01-04
Priority: B
Strategic Takeaways
- The sense-think-act framework is the single most useful lens for evaluating any robotics company or product. If you can identify which part of the loop a company excels at, which part it struggles with, and which part it outsources, you understand its competitive position.
- Robotics and AI are distinct fields with increasing overlap. Confusing them leads to applying software economics to hardware businesses. The cost structures, timelines, and risk profiles are fundamentally different.
- Physical-world constraints (gravity, friction, breakage, power) impose costs and timelines that pure software companies never face. Respect the physics or get burned.
- The autonomy spectrum matters more than binary “autonomous vs. not” classifications. Where a robot sits on this spectrum determines its business model, its regulatory path, and its addressable market.
- Most value in robotics accrues at the integration layer, not at the component level. The company that can make all five subsystems work together reliably in a specific application is the one that captures margin. We will explore this in detail in Module 9 (Economics of Deployment).
Key Terms Introduced
- Robot: A physical machine that senses its environment, processes information, and takes physical action
- Sense-think-act loop: The foundational operational cycle of any robot
- Actuator: A component that converts energy into physical motion
- End effector: The tool attached to the end of a robot that interacts with the environment
- Degrees of freedom: The number of independent axes along which a robot can move
- Autonomy spectrum: The range from full teleoperation to full autonomy
- Teleoperation: Direct human control of a robot from a distance
- Cobot (collaborative robot): A robot designed to work safely alongside humans
- Mobile manipulator: A robot combining a mobile base with manipulation arms
Related Content
- [MODULE] Bodies That Move: Goes deeper on the actuators and mechanical systems introduced in this module’s “Five Subsystems” section
- [MODULE] Sensing and Perception: Full technical treatment of the “sense” layer of the framework
- [MODULE] Economics of Deployment: Explores why integration cost, not hardware, determines robotics ROI
References
1. Čapek, K. (1920). R.U.R. (Rossum’s Universal Robots). Premiered at the National Theatre, Prague, January 25, 1921. English translation by Paul Selver (1923). — The play that gave English the word “robot” ends not with dystopia but with the robots learning to love. A century later, the labor-replacement anxiety Čapek dramatized is still the first question investors ask about every robotics company — and still the wrong framing for understanding where value actually accrues.
2. Murphy, R. (2019). Introduction to AI Robotics. 2nd ed. MIT Press. — The standard graduate textbook that formalizes the sense-plan-act (or sense-think-act) paradigm as the foundational architecture for autonomous robots. Murphy’s framework is used throughout this Core because it maps cleanly onto the evaluation question every investor needs to answer: which part of the loop is this company good at, and which part will kill them?
3. Boston Dynamics. (2024). “Atlas | Partners in Parkour” and “All New Atlas.” Boston Dynamics YouTube, various 2024 releases. — The electric Atlas (unveiled April 2024, replacing the hydraulic version) demonstrates learned locomotion behaviors that would have required years of hand-coded rules in the previous generation. The videos are the clearest public demonstration of the convergence between robotics and AI: the hardware is impressive, but the learned behaviors are what make it useful. Watch the terrain navigation sequences for the state of the art in physical AI.
4. Figure AI. (2025). “Figure Raises Over $1 Billion in Series C.” Figure AI Press Release, September 2025. Prior: Series B of $675M at $2.6B valuation, February 2024. — The 15x valuation jump ($2.6B to $39B) in 18 months reflects investor enthusiasm for the “physical AI” thesis: a single AI system controlling a humanoid across many tasks. Whether the thesis is correct remains unproven — no humanoid robot has demonstrated commercially viable productivity in unstructured environments. The funding trajectory is a bet on a future capability, not a validation of current revenue. Total raised: approximately $1.9 billion.
5. Weise, K. (2024). “The Robots Are Coming for the Warehouse Jobs. It’s Taking a While.” The New York Times, June 2024. — A corrective to the “automation is imminent” narrative. The piece documents how Amazon’s own robotics division has repeatedly pushed back its autonomy timelines — a pattern any investor in warehouse robotics should internalize. The gap between demo-ready and deployment-ready is measured not in software sprints but in years of field testing.
6. International Federation of Robotics. (2025). World Robotics 2025: Industrial Robots. IFR Statistical Department. — The authoritative annual census of the global industrial robotics market. The $16.7 billion figure represents new installations only (not cumulative installed base). IFR data is the closest thing to ground truth for robotics market sizing and is the source most frequently cited in institutional investment research. If you cite one number in a robotics pitch deck, make it an IFR number.
7. Amazon. (2025). Corporate filings and public presentations. — Amazon’s 750,000+ mobile robot fleet is the largest single robotics deployment in history by unit count. The fleet evolved from the 2012 Kiva Systems acquisition and now includes multiple robot types: pod-carrying mobile bases, sorting systems, and emerging manipulation platforms. The scale demonstrates that robotics deployment works when the environment is controlled and the task is constrained — and that even Amazon hasn’t solved the general manipulation problem.
8. Wingfield, N. (2012). “Amazon to Buy Kiva Systems for $775 Million.” The New York Times, March 19, 2012. — The acquisition that launched the modern warehouse robotics era. Amazon paid $775 million for Kiva, then pulled the product from the market (Kiva had been selling to other retailers) and rebranded it as Amazon Robotics. The decision to vertically integrate robotics — rather than buy from a vendor — set the template that other logistics giants have since followed. The acquisition price looks like a bargain relative to the operational savings generated.
9. Amazon. (2023). “Amazon Introduces Sparrow.” Amazon Science Blog, November 2022; updated deployments through 2023. — Sparrow is Amazon’s robotic manipulation system designed to handle individual items in fulfillment centers. It uses computer vision and machine learning to identify and grasp items of varying shapes and sizes — the specific perception-manipulation challenge that the Kiva robots were never designed to solve. As of early 2026, Sparrow handles a subset of Amazon’s inventory; the full range of warehouse SKUs remains beyond current capability, illustrating the “last inch” problem described in this module.
