<br /><br />@Yosun I was trying to tell the story of an inspiring, immersive geographic interface, knowing that it's one implementation of something that could be much bigger. If you simply wanted to make this interface leverage the gig, you'd have location-based videoconferencing. I step on a point that has an active user, *blip*, I open a live channel with that user. Done. This is an idea that I hope has hooks, an idea that is extensible. If you had an immersive location-based tool like this, what would you make it do?
</p>
===Team Idea 4: High performance distributed research computing (for science, business, etc)===
WHO: Proposed by Roger Pincombe (OkGoDoIt), but very open to suggestions and discussion<br /><br />
WHAT: Projects like Folding@home (http://folding.stanford.edu), Seti@Home (http://setiathome.ssl.berkeley.edu/) and LHC@Home (http://lhcathome.web.cern.ch/LHCathome/) enable researchers to harness spare computing power to do insane amounts of distributed number crunching. There are even commercial efforts like CPUsage (http://cpusage.com/) and Plura Processing (http://www.pluraprocessing.com/). One of the issues with massively distributed computing is that network overhead limits the types of tasks that can be effectively distributed to those that are easily broken into separate chunks and worked on independently. The software clients download a data set, process it, and then upload results.<br /><br />
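To make the current model concrete, here is a minimal Python sketch of that download/process/upload loop. The server URL, endpoints, and JSON job format are hypothetical placeholders for illustration, not how any of the projects above actually work.
<pre>
# Minimal sketch of a Folding@home/SETI@home-style work-unit client:
# fetch an independent chunk of data, crunch it locally, upload the result.
# SERVER and the job format are made-up placeholders, not a real API.
import json
import urllib.request

SERVER = "https://example.org/api"  # hypothetical coordination server

def fetch_job():
    """Download one self-contained work unit (id + data payload)."""
    with urllib.request.urlopen(SERVER + "/job") as resp:
        return json.load(resp)

def process(job):
    """Stand-in for the real number crunching; here, just sum the payload."""
    return {"job_id": job["id"], "result": sum(job["data"])}

def upload(result):
    """Send the finished result back to the coordination server."""
    req = urllib.request.Request(
        SERVER + "/result",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    upload(process(fetch_job()))
</pre>
Note that the client only talks to the server twice per work unit; that is exactly the shape of problem that tolerates slow links, and exactly the constraint that gigabit connections could relax.<br /><br />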
With the power of gigabit internet, massively distributed computing could be applied to a much wider set of scenarios: perhaps systems that require constant communication among the workers, or problems where data sets cannot be broken into reasonably small chunks for computation. I'm no expert in distributed computing and I don't have a specific idea yet, but this is an area I think will be hugely empowered by the rise of superfast internet. I encourage us all to discuss possible scenarios where this could be applied, or even methods to allow arbitrary computation (like less expensive AWS EC2 spot instances for data processing) without compromising the security of the end user. Maybe even client software that could run on smartphones (at night, when they are charging and connected to home wifi); millions of surprisingly powerful smartphones sit idle for a large portion of the night. With "big data" being the buzzword that it currently is, I imagine there is a lot of potential here.<br /><br />
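A rough back-of-the-envelope calculation (all numbers below are assumptions, not measurements) shows why bandwidth is the lever here:
<pre>
# Illustrative arithmetic: how long does it take to ship one work unit,
# and how does that compare to the time spent computing on it?
def transfer_seconds(megabytes, megabits_per_second):
    """Time to move a chunk of data over a link of the given speed."""
    return (megabytes * 8) / megabits_per_second

chunk_mb = 500          # assumed size of one work unit
compute_seconds = 120   # assumed local processing time for that chunk

for label, mbps in [("10 Mbps cable", 10), ("1 Gbps fiber", 1000)]:
    t = transfer_seconds(chunk_mb, mbps)
    print(f"{label}: transfer {t:.0f}s, compute/transfer ratio {compute_seconds / t:.1f}")
</pre>
On the slow link the node spends far more time moving data than computing (400 s of transfer for 120 s of work), while on gigabit the transfer takes only a few seconds, so chattier, more tightly coupled workloads start to make sense.<br /><br />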
I can add some specific ideas here soon, but I wanted to get the discussion started and see what you all think.<br /><br />
NEEDS: This is less of an idea and more of a starting point for idea discussion. It would be great if people more familiar with distributed computing and "big data" could add their thoughts.<br /><br />
DISCUSSION: (your thoughts here...) | |||