==Big data and AI: Training AI==
[[Image:Museum_fossilized_internet_training_AI_2291.jpg|500px]]
''Homage to Trevor Paglen’s From ‘Apple’ to ‘Anomaly’ exhibited at the [https://www.barbican.org.uk/our-story/press-room/trevor-paglen-from-apple-to-anomaly Barbican Art Gallery] in London in 2019.''
This framed picture of Paglen’s work features selections from the ImageNet dataset for object recognition. AI needed vast amounts of data in order to understand our world. In 2020 China had a massive system of data collection, which allowed the country to build and own the largest datasets used to train AI in the world. GTCOM, one of the leading AI companies at the time, was claimed to harvest between 2 and 3 petabytes of data annually. Back in 2025, AI workloads [https://www.technologyreview.com/s/614005/ai-computing-cloud-computing-microchips/ accounted for a tenth of the world’s electricity usage]. Training AI was generally quite energy intensive: [https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/ training several popular large AI models] produced as much CO2 as 34 passengers flying between Sydney and London. In the wake of the Econet Agreement, the field of Tiny AI emerged, in which computer scientists competed to use the smallest possible training datasets, alongside a reemergence of analog computing principles from the 1960s.
==Mining Rig, 2020==