DeepMind, the AI startup Google acquired in 2014, is probably best known for creating the first AI to beat a world champion at Go. So what do you do after mastering one of the world's most challenging board games? You tackle a complex video game.

Specifically, DeepMind decided to write an AI to play the real-time strategy game StarCraft II. StarCraft requires players to gather resources, build dozens of military units, and use them to try to destroy their opponents. StarCraft is particularly challenging for an AI because players must carry out long-term plans over several minutes of gameplay, tweaking them on the fly in the face of enemy counterattacks. DeepMind says that prior to its own effort, no one had come close to designing a StarCraft AI as good as the best human players.

Last Thursday, DeepMind announced a significant breakthrough. The company pitted its AI, dubbed AlphaStar, against two top StarCraft players: Dario "TLO" Wünsch and Grzegorz "MaNa" Komincz. AlphaStar won a five-game series against Wünsch 5-0, then beat Komincz 5-0, too.

AlphaStar may be the strongest StarCraft AI ever created, but the accomplishment wasn't quite as big as it might appear at first glance, because it wasn't an entirely fair fight.

## AlphaStar was trained using "up to 200 years" of virtual gameplay

DeepMind writes that "AlphaStar's behavior is generated by a deep neural network that receives input data from the raw game interface (a list of units and their properties) and outputs a sequence of instructions that constitute an action within the game. More specifically, the neural network architecture applies a transformer torso to the units, combined with a deep LSTM core, an auto-regressive policy head with a pointer network, and a centralized value baseline."

I'll cop to not fully understanding what all of that means. DeepMind declined to talk to me for this story, and it has yet to release a peer-reviewed paper explaining exactly how AlphaStar works.
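DeepMind's quoted description is dense, but one piece of it — the "pointer network" in the policy head — can be illustrated in miniature: the network picks one unit out of a variable-length list by scoring each unit's embedding against a query vector and taking a softmax over the scores. The sketch below is purely illustrative (the embeddings and query are made up, and this is not DeepMind's implementation, which remains unpublished):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pointer_select(unit_embeddings, query):
    """Toy pointer-network step: score every unit against a query
    vector (dot product), normalize with softmax, and return the
    index of the most probable unit plus the full distribution.
    AlphaStar's real policy head is auto-regressive and far more
    elaborate; this only shows the selection mechanism."""
    scores = [sum(u * q for u, q in zip(unit, query))
              for unit in unit_embeddings]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs

# Hypothetical example: four units with 3-dimensional embeddings.
units = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [0.5, 0.5, 0.0]]
query = [0.0, 1.0, 0.0]   # "which unit should the next action target?"
idx, probs = pointer_select(units, query)
print(idx)  # -> 1 (unit 1 scores highest against this query)
```

The appeal of a pointer mechanism for StarCraft is that the number of units on the board changes constantly, and pointing into the input list handles a variable-size action space naturally.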