If you drive a modern car, you already know the feeling: a light flashes in the mirror, a chime sounds, and the car quietly tells you, “Don’t change lanes now.”
On a bike or motorcycle, that voice is usually missing. You have mirrors, instincts, and not much else.
That gap is exactly where a new wave of camera technology is starting to change the game. The big shift: instead of sending video to a huge computer somewhere in the vehicle, the intelligence is moving right into the camera itself.
In the two-wheeler world, that matters a lot.
Why “just add a GPU” doesn’t work on two wheels
Traditional car ADAS (Advanced Driver Assistance Systems) follows a simple recipe: put cameras around the body, push all that video to a central GPU box, run deep learning models there, and send warnings back to the driver.
Cars can get away with that. They have space, power, cooling, and budgets.
Now imagine trying the same thing on:
- a city e-bike with a small battery,
- a lightweight scooter,
- or even a mid-size motorcycle with limited room under the seat.
A 150-watt GPU and a fist-sized ECU are non-starters. You don’t have the volume, the power budget, or the cost headroom.
And yet, the safety need is very real:
- Cars and vans sitting in your blind spot.
- Fast close passes from behind.
- Sudden braking or cut-ins ahead.
- Pedestrians stepping out into the lane.
To give riders some “digital awareness” without turning the bike into a rolling data center, the AI has to move as close to the camera as possible.
That’s where on-sensor AI and near-sensor AI come in.
On-sensor AI: when the image sensor runs the model
On-sensor AI is exactly what it sounds like: the neural network runs inside the image sensor package itself.
Sony’s IMX500 is the flagship example. From the outside it looks like just another high-end sensor. Under the hood, it combines:
- an image sensor,
- a signal processor,
- and a small neural network engine, stacked together in a single sensor package.
Instead of streaming full-resolution video to a processor somewhere else, the sensor can:
- take in raw pixels,
- run an object detection model,
- and output what it saw, not just the pixels.
So instead of sending 60 frames per second of 1080p video, an IMX500-based camera can send something like:
- “Car detected on left at 7 metres.”
- “Vehicle closing fast from behind.”
- “Pedestrian in path ahead.”
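To put numbers on that: raw 1080p60 video is roughly 186 MB/s before compression, while a detection record fits in a handful of bytes. Below is a minimal sketch in C of what such a record could look like – the field names, units, and layout are illustrative assumptions, not Sony’s actual metadata format (the real IMX500 output depends on the deployed model).

```c
/* Illustrative sketch only: the real on-sensor output format is
 * model-dependent. Sizes here are assumptions chosen to show the
 * order-of-magnitude difference versus streaming raw video. */
#include <stdint.h>
#include <stdio.h>

typedef enum { OBJ_CAR, OBJ_MOTORCYCLE, OBJ_PEDESTRIAN } obj_class_t;

typedef struct {
    uint32_t    timestamp_ms;  /* capture time of the source frame       */
    obj_class_t cls;           /* what the on-sensor model detected      */
    int16_t     bearing_deg;   /* angle off the camera axis, + = right   */
    uint16_t    distance_dm;   /* estimated range, decimetres            */
    int16_t     closing_dms;   /* closing speed, dm/s, + = approaching   */
    uint8_t     confidence;    /* 0..255 model confidence                */
} detection_t;                 /* ~16 bytes vs ~186 MB/s of raw 1080p60  */

int main(void) {
    detection_t d = { 120034, OBJ_CAR, -35, 70, 25, 212 };
    printf("car at %.1f m, bearing %d deg, closing %.1f m/s\n",
           d.distance_dm / 10.0, d.bearing_deg, d.closing_dms / 10.0);
    return 0;
}
```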
For two-wheelers, that changes the design rules:
- You can put a smart camera on the rear rack or under the tail light.
- The camera itself can recognise vehicles and risky situations.
- A tiny microcontroller or light CPU only has to handle alerts and maybe talk to a display or an app.
No separate GPU, no heavy accelerator board, no fat video cables running through the frame.
It’s the difference between “camera + computer” and “camera that is also the computer.”
Near-sensor AI: a tiny brain next to a normal sensor
On-sensor AI is one way to push intelligence to the edge. Another is near-sensor AI.
Here, you keep the image sensor “normal,” but you add a small processor right next to it – often on the same board, sometimes even in the same package. That chip:
- receives raw frames from the sensor over a very short connection,
- does image processing and AI inference locally,
- and then feeds compact results to the rest of the system.
OmniVision’s OAX4600 is a good example of this pattern. It’s a small automotive SoC designed to sit beside a camera sensor and handle both image signal processing and AI tasks at the “far edge” of the system.
Paired with a standard RGB-IR sensor, an OAX4600-class chip turns a camera module into a smart node:
- the lens and sensor capture the scene,
- the near-sensor SoC cleans it up, runs the model, and figures out what’s important,
- only metadata or light video streams leave the module.
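As a rough sketch of that flow, here is how one capture cycle inside such a module could be structured. run_isp() and run_inference() are hypothetical stand-ins for the vendor’s ISP and NPU calls (stubbed here so the example actually runs), and detection_t is a trimmed-down version of the record from the earlier sketch.

```c
/* Sketch of the near-sensor data flow: full frames stay inside the
 * module; only small detection records cross the connector. The ISP
 * and NPU calls below are stubbed placeholders, not a vendor API. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_DETECTIONS 8

typedef struct { uint16_t distance_dm; int16_t bearing_deg; } detection_t;

/* Stub ISP: a real one would demosaic, denoise and HDR-merge the raw
 * frame; here it just copies so the sketch compiles and runs. */
static void run_isp(const uint8_t *raw, size_t len, uint8_t *rgb) {
    memcpy(rgb, raw, len);
}

/* Stub NPU: a real model would emit one record per detected object. */
static size_t run_inference(const uint8_t *rgb, detection_t *out,
                            size_t max) {
    (void)rgb;
    if (max == 0) return 0;
    out[0] = (detection_t){ .distance_dm = 70, .bearing_deg = -35 };
    return 1;
}

/* One capture cycle: everything pixel-sized stays on this side. */
static size_t smart_node_cycle(const uint8_t *raw, size_t raw_len,
                               uint8_t *work, detection_t *out) {
    run_isp(raw, raw_len, work);
    return run_inference(work, out, MAX_DETECTIONS);
}

int main(void) {
    uint8_t raw[64] = {0}, work[64];       /* stand-in frame buffers */
    detection_t dets[MAX_DETECTIONS];
    size_t n = smart_node_cycle(raw, sizeof raw, work, dets);
    printf("%zu detection(s), first at %.1f m\n",
           n, dets[0].distance_dm / 10.0);
    return 0;
}
```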
You end up with something that behaves a lot like an IMX500-style smart sensor, but with more flexibility:
- you can choose your favourite sensor vendor (Sony, OmniVision, SmartSens, etc.),
- you can scale the model size with the near-sensor chip,
- you can standardise the camera interface for different bikes and use cases.
Our own “FlyingChip” concept lives in that same category: a near-sensor edge-AI SoC that sits next to the camera, not in a big central box.
From one big brain to many small ones
Put these pieces together and the architecture for two-wheeler ARAS (Advanced Rider Assistance Systems) starts to look very different from car ADAS.
Instead of one big GPU in the middle of the vehicle, you get multiple small brains at the edge.
A typical future layout could look like this:
- A smart rear camera with an IMX500-class sensor watching for fast close passes and vehicles in the blind zone.
- A smart front module combining a high-dynamic-range sensor with a near-sensor AI chip, watching traffic ahead and reading the road situation.
- A lightweight IRAS ECU (Intelligent Rider Assistance System ECU) in the middle that:
  - fuses the alerts from front and rear,
  - talks to the bike’s CAN bus or controller,
  - drives simple icons on a dash or sends messages to a smartphone,
  - logs “near miss” events for coaching, fleet safety, or insurance.
The AI heavy lifting happens at the camera nodes. The central ECU becomes a coordinator instead of a supercomputer.
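Here is a minimal sketch of that coordinator role, assuming the camera nodes report distance and closing speed as in the earlier examples. The 3-second time-to-contact threshold and the icon set are illustrative assumptions, not production values.

```c
/* Sketch of alert fusion on the IRAS ECU: no pixels arrive here, only
 * compact reports from the camera nodes. Thresholds are illustrative. */
#include <stdio.h>

typedef enum { ICON_NONE, ICON_DANGER_REAR, ICON_TOO_CLOSE_AHEAD } icon_t;

typedef struct { float distance_m, closing_mps; int from_rear; } alert_t;

/* Pick the most urgent icon from whatever the camera nodes reported,
 * ranked by time-to-contact (distance / closing speed). */
icon_t fuse_alerts(const alert_t *a, int n) {
    icon_t icon = ICON_NONE;
    float worst_ttc = 1e9f;
    for (int i = 0; i < n; i++) {
        if (a[i].closing_mps <= 0.1f) continue;  /* not approaching */
        float ttc = a[i].distance_m / a[i].closing_mps;
        if (ttc < 3.0f && ttc < worst_ttc) {     /* 3 s: assumed limit */
            worst_ttc = ttc;
            icon = a[i].from_rear ? ICON_DANGER_REAR
                                  : ICON_TOO_CLOSE_AHEAD;
        }
    }
    return icon;
}

int main(void) {
    alert_t reports[] = {
        { 18.0f, 8.0f, 1 },  /* rear camera: car closing at 8 m/s */
        { 12.0f, 1.0f, 0 },  /* front camera: slow approach ahead */
    };
    printf("icon = %d\n", fuse_alerts(reports, 2)); /* ICON_DANGER_REAR */
    return 0;
}
```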
That brings a couple of big wins for bikes and motorcycles:
- You can add real ARAS behaviour without a car-style electronics rack.
- Power draw stays within what an e-bike or scooter battery can reasonably support.
- Packaging becomes realistic: small modules in lights, fairings, or mirrors instead of big boxes in non-existent engine bays.
- Costs stay in line with what the market will accept.
What this unlocks on real vehicles
Once the AI lives on or near the sensor, a lot of useful things suddenly become practical on two-wheelers:
- A rear safety light that doesn’t just blink, but actually knows when a car is approaching too fast and can change its behaviour.
- A bar-mounted head unit on an e-bike that quietly warns when you’re drifting into traffic or following too closely.
- A motorcycle cluster that shows simple, timely icons:
  - “danger right,”
  - “too close ahead,”
  - “fast approach from rear left.”
- Fleet or city bikes that can log exactly where and when riders experience close passes, giving cities hard data on dangerous roads.
All of that without shipping a GPU and a wind tunnel’s worth of cooling on every frame.
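As one concrete example of “changing its behaviour”: a smart rear light could map the estimated time-to-contact of an approaching car onto its flash rate. The thresholds and the 0–4 Hz range below are assumptions for illustration, not a product specification.

```c
/* Sketch: map time-to-contact (seconds) to a flash rate in Hz.
 * Steady glow when nothing is closing, faster flashing as the
 * threat gets nearer. All constants are illustrative assumptions. */
#include <stdio.h>

float flash_rate_hz(float ttc_s) {
    if (ttc_s > 6.0f) return 0.0f;   /* steady: no urgent threat */
    if (ttc_s < 1.5f) return 4.0f;   /* maximum urgency          */
    /* Linear ramp between the two thresholds. */
    return 4.0f * (6.0f - ttc_s) / (6.0f - 1.5f);
}

int main(void) {
    for (float ttc = 8.0f; ttc >= 1.0f; ttc -= 2.0f)
        printf("ttc %.0f s -> %.1f Hz\n", ttc, flash_rate_hz(ttc));
    return 0;
}
```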
Where Camemake fits in this picture
Camemake sits at the point where these ideas move from slides to hardware.
Our focus is the vision layer: cameras, optics, and the electrical path into whatever AI engine you choose.
In practical terms, that means:
- Camera modules built for real roads
  High dynamic range, tuned ISP parameters, and robust optics, so that an IMX500 or a near-sensor AI SoC works from clean, usable images – not from glare, noise, and smear.
- Mechanical designs that disappear into the vehicle
  Small MIPI and USB modules, slim lens options, and mounting concepts that fit into headlights, tail lights, cockpits, and fairings without making designers’ lives hell.
- Support for both on-sensor and near-sensor AI
  We can help design and manufacture:
  - modules built around on-sensor AI devices (like IMX500-class chips),
  - and modules that pair a standard sensor with a near-sensor AI SoC (OAX4600/FlyingChip-style).
- A clear path to the IRAS ECU
  Whether the “brain” is a tiny microcontroller, a compact SoC, or an OEM’s own ECU, we provide the right electrical interface and timing so that the smart camera nodes talk reliably to the rest of the bike.
The result is not “a camera,” but a drop-in building block for ARAS: a vision module that already knows how to live on a bike and how to feed whatever AI strategy the OEM chooses.
Why this matters now
We’re at an inflection point:
- AI is light enough to run on a sensor.
- Tiny SoCs can do what used to require a desktop GPU.
- Two-wheeler safety concepts are mature and clear.
- Riders and brands are asking for more than just brighter lights and better brakes.
On-sensor and near-sensor AI are the missing pieces that make serious vision-based assistance practical on small vehicles.
At Camemake, we see it very simply:
if a camera can see and think at the edge, then any bike or motorcycle can become more aware of its surroundings without growing a huge computer.
If you’re exploring ARAS for two-wheelers and want to talk about cameras that don’t just look, but also take the AI load off a central GPU, that’s exactly what we build at camemake.eu.