
Ask HN: Reinforcement learning for single, lower end graphic cards?
14 by DrNuke | 3 comments on Hacker News.
On one side, more and more hardware is being thrown in parallel to ingest and compute the astonishing amounts of data generated by realistic 3D simulators, especially for robotics, with big names like OpenAI now giving up on the field, as reported at https://ift.tt/36EEXMi ; on the other side, newer simulators like Brax from Google ( https://ift.tt/3xHRGJS ) aim at “matching the performance of a large compute cluster with just a single TPU or GPU”. Where do we stand on the latter side of the equation, then? What is the state of the art with single, lower-end GPUs like the GTX 1070 8GB in my 2016 gaming laptop? What do we lower-end users need to read, learn and test these days? Thanks.
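
For a concrete sense of what the single-GPU approach looks like, here is a minimal sketch of the core trick simulators like Brax rely on: writing the environment itself in JAX so that jit and vmap can step thousands of rollouts in parallel on one device. Everything below is a made-up placeholder (toy 1D point-mass environment, linear policy, naive REINFORCE update, invented hyperparameters), not Brax's actual API.

```python
# Minimal sketch of GPU-parallel RL in pure JAX. The environment, policy and
# update rule are toy placeholders, not Brax's API; the point is that
# jit + vmap let a single GPU step thousands of environments in parallel.
import jax
import jax.numpy as jnp

NUM_ENVS = 4096   # parallel environments on one device (assumed value)
HORIZON = 50      # steps per rollout
LR = 1e-2         # learning rate
SIGMA = 0.1       # fixed exploration noise

def env_step(state, action):
    # Toy 1D point mass: state = [position, velocity], reward for staying near 0.
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return jnp.stack([pos, vel]), -(pos ** 2)

def sample_action(params, state, key):
    # Gaussian policy with a linear mean; the sampled action is detached so
    # the REINFORCE gradient flows only through the log-probability term.
    mean = jnp.dot(params, state)
    action = jax.lax.stop_gradient(mean + SIGMA * jax.random.normal(key))
    logp = -0.5 * ((action - mean) / SIGMA) ** 2
    return action, logp

def rollout(params, key):
    # One episode; returns the REINFORCE surrogate and the episode return.
    key_init, key_steps = jax.random.split(key)
    state0 = jax.random.normal(key_init, (2,))

    def body(state, step_key):
        action, logp = sample_action(params, state, step_key)
        state, reward = env_step(state, action)
        return state, (logp, reward)

    _, (logps, rewards) = jax.lax.scan(body, state0, jax.random.split(key_steps, HORIZON))
    ret = jnp.sum(rewards)
    return jnp.sum(logps) * jax.lax.stop_gradient(ret), ret

def loss(params, keys):
    # vmap runs every environment in parallel on the GPU.
    surrogates, returns = jax.vmap(rollout, in_axes=(None, 0))(params, keys)
    return -jnp.mean(surrogates), jnp.mean(returns)

@jax.jit
def update(params, key):
    keys = jax.random.split(key, NUM_ENVS)
    (_, avg_return), grads = jax.value_and_grad(loss, has_aux=True)(params, keys)
    return params - LR * grads, avg_return

params = jnp.zeros(2)          # linear policy weights
key = jax.random.PRNGKey(0)
for i in range(200):
    key, subkey = jax.random.split(key)
    params, avg_return = update(params, subkey)
    if i % 50 == 0:
        print(f"iter {i}: average return {float(avg_return):.2f}")
```

The design point, roughly the property Brax advertises, is that both the rollouts and the gradient updates stay on the one device, so none of the time goes to shuttling simulation data between CPU and GPU.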
