Multitouch.fi
category: general [glöplog]
The people I know from there are not (ex) sceners, but I don't know everybody, of course.
stopher: Thanks for the tip, but this is for a high-profile museum installation, and D.I.Y. or something without a proper support agreement isn't an option.. and yes, we looked at OneCom, but they refused to sell the device without their software (which is overpriced, and also: bad), so that was a no-go. I'm looking at maybe buying directly from Finland, or skipping multitouch altogether and just going for a series of the new Samsung SyncMaster 32" touch displays. Multitouch would be the coolest though, but you know.. budgets :/
But what's with the lag?
Hmm, to me that lag seems to be due only to filtering. Quite probably they're using a firewire camera, so the maximum lag should be slightly more than 1/60 of a second ≈ 16ms, if the camera is actually running at 60fps. Just a hypothesis. I'll see if I can try them.
They list Citywall as a reference. I've tried that out, and it seems pretty responsive to me.
Gloom:
What about Microsoft Surface? It's a bit pricy at more than €10k each, but at least it looks like it has a decent SDK. http://www.microsoft.com/surface/
i like multitouch.fi
ms surface doesn't ship everywhere.
if your problem is pricing, diy: http://www.instructables.com/id/Interactive-Multitouch-Display/
Sinar: we looked at Surface, but it is too expensive, does not have very flexible mounting options, plus: it's only 1366x768 while the Multitouch Cell is half the price and has 1920x1280.
Horrible lag, though.
psonice: i've taken part in a similar project which used infrared lasers. our method was differentiating the sources by their blinking patterns. we used binary codes that can be recognised independently of phase, like:
10101010.. for source 1
110110110.. for source 2
11101110.. for source 3 etc.
now if the frequencies are low enough to be caught by the camera and high enough to be ignored by the eye, i suppose it will work. there is no lower limit, however, when you work in an infrared range that the camera can sense but the eye can't.
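for what it's worth, a rough sketch of the phase-independent matching in python (the code words and the sample window here are just illustrative, not what we actually used):
Code:
# Sketch: identify a blinking IR source from its per-frame on/off history,
# independent of phase, by matching against all cyclic rotations of the
# known code words. Code words and window length are illustrative.

CODES = {
    1: [1, 0],        # 101010...
    2: [1, 1, 0],     # 110110...
    3: [1, 1, 1, 0],  # 11101110...
}

def rotations(word):
    """All cyclic shifts of a code word."""
    return [word[i:] + word[:i] for i in range(len(word))]

def matches(samples, word):
    """True if the observed samples fit some phase of the code word."""
    n = len(samples)
    for rot in rotations(word):
        # Repeat the rotated word so it covers the whole sample window.
        expected = (rot * (n // len(rot) + 1))[:n]
        if expected == samples:
            return True
    return False

def identify(samples):
    """Return the source id whose code explains the samples, or None.
    The window should cover at least a couple of code periods."""
    for source_id, word in CODES.items():
        if matches(samples, word):
            return source_id
    return None

# Example: 12 frames of on/off observations for one tracked blob.
print(identify([0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]))  # -> 2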
Thanks for all the suggestions! Plenty to be thinking about.
The display really needs to be as fast and lag-free as possible, so blinking might not be ideal (I'm guessing that if the light source is missing from some frames, it's going to effectively cut the tracking frame rate at least in half.. or is it? I think the camera will run at 60fps, so perhaps it's not so much of an issue)
Different IR frequencies sounds like a no-go, unless we can get some kind of more expensive camera somehow..
Having a shape made of multiple LEDs is possible; the only thing that concerns me with that is occlusion. Being able to track the distance of a pen from the wall + its angle though, that's interesting. I could probably use that info for some cool stuff :)
Different intensities.. dunno. That would be ideal in a lot of ways, but it would depend on how accurately the different pens could be detected. Testing needed.
Why not just use different colours?
Optimus: I was thinking the same.
Sure, the frame rate goes down (so fast movements could become an issue, unless you have a high-framerate camera, that is: price > 1k€ ...).
Also: ideally you'd need a way to generate the blinking at the same rate as the camera's shutter...
A simpler idea might be to put a super-cheap camera (possibly IR) on top, maybe with a fish-eye lens (or maybe two of them), and perform stoopid horizontal user-position tracking... that might work out nicely (unless users stack up...! :)
Wait, I meant: Optanes :)
(btw: did you publish anything about that project you've mentioned? thx :)
broderick: nope, we didn't publish anything, but the vids of the robots using those should be lying somewhere :)
Better than many others, but with some issues anyway (the initial models at least seem to fail a bit too much: one here a few days ago, and one failed right after installation at the client's place).
The SDK isn't that great, but some problems are inherent to the technique (choose a black bg and biiig objects if you can).
broderick: IR camera and firewire 800 indeed.
psonice:
What about not differentiating the pointers in hardware at all?
Suppose you show an image on the wall with N different numbered squares / areas, and ask each user to point inside their pen's number for, let's say, a second, until the pen is tracked.
Then you can always predict the next-frame position of each pointer. While the pointers show up in the next frame at the predicted positions (within a certain threshold) and there is no overlapping conflict, they keep being tracked. If tracking is lost, the pens without tracking are asked to stay in the square areas again to be re-tracked.
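Something like this, roughly (just a Python sketch; the constant-velocity prediction and the threshold value are assumptions you would have to tune):
Code:
# Sketch of the prediction-based tracking: each pen keeps its last two
# positions, the next position is predicted with constant velocity, and a
# detection is accepted only if it is the single blob within a threshold
# of the prediction. Threshold and motion model are illustrative.
import math

THRESHOLD = 40.0  # pixels; needs tuning for the real setup

class PenTrack:
    def __init__(self, pen_id, pos):
        self.pen_id = pen_id
        self.prev = pos
        self.curr = pos
        self.lost = False

    def predict(self):
        # Constant-velocity prediction from the last two frames.
        return (2 * self.curr[0] - self.prev[0],
                2 * self.curr[1] - self.prev[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_tracks(tracks, detections):
    """Call once per frame with the detected blob positions."""
    for track in tracks:
        if track.lost:
            continue
        pred = track.predict()
        near = [d for d in detections if dist(d, pred) < THRESHOLD]
        if len(near) == 1:
            track.prev, track.curr = track.curr, near[0]
        else:
            # No blob, or an ambiguous/overlapping one: identity is lost and
            # the user is asked to re-register in their numbered square.
            track.lost = True

# Example usage, once per camera frame after blob detection:
# tracks = [PenTrack(1, (100, 100)), PenTrack(2, (300, 120))]
# update_tracks(tracks, [(104, 103), (310, 118)])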
makc: You're talking about Multitouch.fi now, right? When you say the SDK isn't great, could you be more specific?
doom: that's one idea I already mentioned. It would have to be visible colour though, and I'd have to detect both the visible colour + IR signal I think, or I'd be detecting colour from the screen. Well worth trying I think.
Texel: that may be possible, but it's likely that 2 people might want to work together in the same square, which would screw things up a bit. Hmm.. gives me some ideas though.. I could use a combination of IR + coloured LEDs, and leave the IR light always on so it's always possible to track it. The colour LED would shine when the pen is active, allowing it to be identified. And a button on the pen could make the IR light flash, telling the controller to bring up a menu or whatever. Many options :D
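Roughly what I'm picturing for the detection side (a Python sketch; it assumes the IR and colour cameras are already calibrated to the same coordinates, and the colour classes and thresholds are made-up values):
Code:
# Sketch: track pens from the always-on IR blobs, then identify each one by
# sampling the visible-light camera around its blob when the colour LED is
# lit. Camera registration, hue ranges and thresholds are assumptions.
import numpy as np

PEN_COLOURS = {            # rough hue ranges (OpenCV-style hue, 0..179)
    "red":   (0, 10),      # red wrap-around (170..179) ignored for brevity
    "green": (50, 70),
    "blue":  (110, 130),
}

def classify_pen(colour_hsv, blob_xy, radius=6):
    """Return the pen colour name at an IR blob position, or None if the
    colour LED seems to be off (pen tracked but not active).
    colour_hsv: HSV image, e.g. cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)."""
    x, y = int(blob_xy[0]), int(blob_xy[1])
    patch = colour_hsv[max(0, y - radius):y + radius,
                       max(0, x - radius):x + radius]
    if patch.size == 0:
        return None
    hue = np.median(patch[:, :, 0])
    sat = np.median(patch[:, :, 1])
    if sat < 80:           # unsaturated -> LED probably off
        return None
    for name, (lo, hi) in PEN_COLOURS.items():
        if lo <= hue <= hi:
            return name
    return None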
psonice. I mean:
[1] [2] [3] [4] [5]
[ ]
[ ]
[ work area ]
[ ]
[ ]
The top squares are where you point your pen and wait for a second until it is tracked. While it is tracked, you can move in the work area. The software will predict positions, and as long as the pointers are not in conflict (that is, the prediction is acceptable within a threshold), you keep knowing that this is pen number N. If the track is lost, the user can be required to go to the pen-detection square again (rough code sketch below the diagram).
oops... let's try again:
Code:
[1] [2] [3] [4] [5]
[                 ]
[                 ]
[    work area    ]
[                 ]
[                 ]
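In code, the registration part could be something as simple as this (a Python sketch; the square geometry, frame rate and dwell time are just example values):
Code:
# Sketch: an unidentified blob that stays inside one of the numbered squares
# for about a second gets that square's pen id. Geometry, frame rate and
# dwell time are illustrative values.

FPS = 60
DWELL_FRAMES = FPS           # ~1 second inside the square
SQUARES = {                  # pen id -> (x0, y0, x1, y1) in screen pixels
    1: (0,   0, 200,  150),
    2: (200, 0, 400,  150),
    3: (400, 0, 600,  150),
    4: (600, 0, 800,  150),
    5: (800, 0, 1000, 150),
}

dwell = {pen_id: 0 for pen_id in SQUARES}

def inside(box, pos):
    x0, y0, x1, y1 = box
    return x0 <= pos[0] < x1 and y0 <= pos[1] < y1

def register(unassigned_blobs, taken_ids):
    """Call once per frame; returns {pen_id: position} for newly assigned pens."""
    assigned = {}
    for pen_id, box in SQUARES.items():
        if pen_id in taken_ids:
            continue
        hits = [b for b in unassigned_blobs if inside(box, b)]
        if len(hits) == 1:
            dwell[pen_id] += 1
            if dwell[pen_id] >= DWELL_FRAMES:
                assigned[pen_id] = hits[0]
                dwell[pen_id] = 0
        else:
            dwell[pen_id] = 0
    return assigned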
Quote:
that's one idea I already mentioned. It would have to be visible colour though, and I'd have to detect both the visible colour + IR signal I think, or I'd be detecting colour from the screen.
Then:
1) Build a very rigid frame for the whole setup, as heavy as possible. Paint the whole interior matt black, save for the screen.
2) Use a regular visible-light camera and place a dimming filter over the camera lens. Make it strong enough that a white pixel on the screen is recorded as something like a 50% grey. Or experiment to find the right strength.
3) Calibrate the setup so that you know exactly what rectangle in the camera stream corresponds to the image from the projector.
4) Calibrate with various colours and intensities on screen so you know exactly how the camera interprets them with the filter in place.
5) After the calibrations you have enough information to subtract the projected image stream from the recorded image stream. The difference will be whatever light hits the front of the screen. Thanks to the rigid frame any extra noise (coming from movement of the camera relative to the projector, or rotation of the camera/projector relative to the screen) should be minimised, and there are various image processing options for compensating for vibrations/deformations you can't prevent.
6) In the final difference image, look for bright red hotspots, bright green hotspots, bright blue hotspots, etc. (from laser pointers, LEDs, whatever). Thanks to the dimming filter on the camera you won't lose information to overexposure (unless you're using say a laser pointer, which gives no shadow around the hotspot, and the screen is in direct sunlight).
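Steps 5 and 6 in rough Python/OpenCV, assuming the homography from step 3 and the dimming gain from steps 2 and 4 already exist (the values here are placeholders, not measurements):
Code:
# Sketch of steps 5-6: subtract the (warped, dimmed) projected frame from the
# camera frame and look for bright hotspots. `homography` (step 3) and `gain`
# (steps 2/4) come from the calibration; both are assumed inputs here.
import cv2
import numpy as np

def find_hotspots(camera_frame, projected_frame, homography, gain=0.5,
                  threshold=60):
    h, w = camera_frame.shape[:2]
    # Warp the known projector image into camera coordinates (step 3).
    expected = cv2.warpPerspective(projected_frame, homography, (w, h))
    # Apply the calibrated dimming so both images are comparable (steps 2/4).
    expected = (expected.astype(np.float32) * gain).astype(np.uint8)
    # Whatever remains after subtraction is light hitting the screen (step 5).
    diff = cv2.absdiff(camera_frame, expected)
    # Bright spots in any channel are candidate pointers (step 6).
    mask = (diff.max(axis=2) > threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hotspots = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            b, g, r = diff[y, x]        # dominant channel -> pointer colour
            hotspots.append(((x, y), (int(r), int(g), int(b))))
    return hotspots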
Patent pending. ;P
Alternatively, use fucking Wiimotes. ;)
thanks texel + doom, both good ideas there :)
Texel: that setup has some things I really like. The top squares would lose me some work space, but I'll need somewhere to put some per-pen/per-user parameter bits, so perhaps something like this is actually perfect.
Doom: also good ideas. Painting the interior black might not work out though, as this will likely happen in an art gallery somewhere. Most likely they'll have painted it white ;) Rigid isn't a problem though, the surface will probably be either a wall or a window (so it's visible from outside). A window would be cool, but could be a nightmare to get working.
The "use visible light cameras + subtract the image" thing is good. I think we'll likely have a few computers networked, and the processing to subtract the screen image for the cameras shouldn't be any problem at all.
Wiimotes won't work, the people will be using this as a touch surface (i.e. the wiimote would be pressed against the screen).
Right, plenty of options, time to discuss it with the guy that'll be doing the hardware :)
gloom: yes; about the SDK, the low-level part (closed source) seems ok - mostly image recognition. The UI layer (code in the SDK) is good for hacking up a demo, but much less useful for a more complete application (think of some MS demos). You can replace any layer with your own anyway, starting with camera recognition, up through finger "reporting", the UI layer and multimedia.
Overall the main problem is probably the limited documentation available, and even though the developers are helpful, I think they're too busy with their (deserved) business boom :)
If you're considering using those, plan for simple interactions with lots of eye candy, or set aside some time to understand the system and replace some parts so that they better suit your idea (say you want a "multi tap" event, for example).