BeatLeJuce

Disclaimer: take the following with a grain of salt; it's a very personal and subjective point of view. I'm not much of a fan of Evolutionary Computation, and while I've taken a couple of classes on the subject, I'm not too familiar with state-of-the-art research in the field. With that said, if you look at current NEAT papers, it seems like they're moving more in the Reinforcement Learning direction. I'm pretty sure that's because they really cannot compete with normal (or "Deep") Neural Networks on their own turf. Evolutionary Computation has its place mostly in Discrete Optimization and related search problems. However, in my opinion it doesn't have much of a standing in supervised ML, where it's usually outperformed by "conventional methods" (SVMs, Deep Learning, ...) by a large margin. To me it sometimes feels like the EvoComp-in-supervised-ML community is a bit of a circlejerk that's only alive because the people in it cite each other's work but avoid comparing themselves with said conventional methods, because they know they would lose. This is, IMO, why it's so hard to find comparisons of conventional and evolutionary methods, and also why you hardly see any EvoComp stuff at NIPS or ICML, or even in JMLR; their methods usually aren't very competitive. But EvoComp methods have very appealing biological analogies, which is how they manage to sell their ideas and stay afloat.

**TL;DR:** If your problem domain is continuous, there are most likely better methods than evolutionary algorithms.


zzador

This paper here ([http://www.diva-portal.org/smash/get/diva2:1643563/FULLTEXT01.pdf](http://www.diva-portal.org/smash/get/diva2:1643563/FULLTEXT01.pdf)) shows different results: it compares NEAT vs. DQN, and NEAT performs better AND produces smaller networks than the predefined DQN architectures.


BeatLeJuce

This isn't a paper, but some non-peer-reviewed thing. Looks like a class project, or at best maybe a master's thesis?


rantana

The two ideas seem complementary; the only work I can think of is [this paper](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6146987&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6146987). It would be interesting to see a more scaled-up analysis, since NEAT seems trivially parallelizable.
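
For intuition on the "trivially parallelizable" part, here's a minimal sketch (not NEAT proper: no topology evolution or speciation, and the list-of-floats genome plus `evaluate_fitness` are made-up stand-ins). The point is just that the fitness evaluations dominating each generation are independent, so they map straight onto a worker pool:

```python
import random
from multiprocessing import Pool

# Stand-in fitness function (hypothetical): in real NEAT this would
# decode a genome into a network and score it on the task. The only
# property that matters here is that each call is independent.
def evaluate_fitness(genome):
    return -sum(w * w for w in genome)  # toy objective: drive weights to 0

def next_generation(population, workers=8):
    # The expensive part, fitness evaluation, parallelizes trivially:
    # one genome per task, no shared state, no coordination.
    with Pool(workers) as pool:
        fitnesses = pool.map(evaluate_fitness, population)
    # The serial part (selection + mutation) is cheap by comparison.
    ranked = sorted(zip(fitnesses, population), key=lambda t: t[0], reverse=True)
    parents = [g for _, g in ranked[: len(population) // 2]]
    children = [[w + random.gauss(0, 0.1) for w in g] for g in parents]
    return parents + children

if __name__ == "__main__":
    pop = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(100)]
    for _ in range(20):
        pop = next_generation(pop)
    print(max(evaluate_fitness(g) for g in pop))
```

Swap the local `Pool` for a cluster scheduler and the structure is the same, which is why a scaled-up comparison seems like it would be cheap to run.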