Drawing on my core expertise in CS/Math/Neurobiology, I have been carrying out a long-term independent research project in Machine Learning, with a strong biologically inspired slant.
Compared with the currently fashionable style of ML, much of my approach is far more in line with how biological systems work – for example,
it involves decentralization, asynchronous signals, chaotic systems, different timescales (akin to "metabotropic" changes in neurons), and evolutionary algorithms.
It is a mix of bottom-up and top-down approaches:
a search for the minimalist essence of the computing capability of human neurons – minus all the extraneous elements of cell biology – together with
an interest in AGI (Artificial General Intelligence).
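As a toy illustration of the evolutionary ingredient named above – a generic mutation-and-selection loop, not the project's actual (private) code; the fitness function and all parameter values are invented placeholders:

import numpy as np

def fitness(genome: np.ndarray) -> float:
    """Placeholder fitness landscape: peak at the all-ones genome."""
    return -np.sum((genome - 1.0) ** 2)

def evolve(pop_size=50, genome_len=8, generations=100,
           mutation_scale=0.1, elite_frac=0.2):
    """Bare-bones evolutionary loop: score the population, keep the
    top fraction, then refill by mutating randomly chosen survivors."""
    rng = np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, genome_len))
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]        # best genomes survive
        parents = elite[rng.integers(n_elite, size=pop_size)]
        pop = parents + rng.normal(scale=mutation_scale,
                                   size=parents.shape)    # mutate the offspring
    return pop[np.argmax([fitness(g) for g in pop])]

best = evolve()
print(best)   # should approach a genome of all 1s

Note that nothing in this loop requires the fitness function to be differentiable, or even deterministic – which is the property that makes this family of methods relevant here.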
The current state of the project is a Mixed Platform that can handle dual network types – my own biologically inspired networks and "traditional" deep networks (see Implementation, below).
Of special interest to me is the use of ML to control, understand, and fine-tune a dynamical system, such as the biological networks of my open-source project Life123.
Under such circumstances, the gain function (the objective being maximized) is not analytically available, and it typically includes random components from chaotic effects and/or
interactions with unpredictable (simulated) environments. Gradients can therefore only be estimated from sample points in the search space.
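To make that concrete, below is a minimal sketch of one standard way to estimate gradients from sample points alone: simultaneous-perturbation (SPSA-style) two-point estimates of a noisy black-box gain function. It is purely illustrative, not the project's code; noisy_gain and all parameter values are made-up placeholders.

import numpy as np

def noisy_gain(theta: np.ndarray) -> float:
    """Illustrative black-box gain: a smooth peak plus noise, standing in
    for chaotic effects / an unpredictable simulated environment."""
    return -np.sum((theta - 1.0) ** 2) + np.random.normal(scale=0.05)

def spsa_gradient(gain, theta, c=0.1, n_samples=10):
    """Estimate the gradient of `gain` at `theta` from sample points only:
    perturb all coordinates at once with a random +/-1 vector (SPSA),
    and average several two-point estimates to tame the noise."""
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        delta = np.random.choice([-1.0, 1.0], size=theta.shape)
        g_plus = gain(theta + c * delta)
        g_minus = gain(theta - c * delta)
        grad += (g_plus - g_minus) / (2.0 * c) * delta
    return grad / n_samples

# Gradient *ascent* on the estimated gradient (we are maximizing the gain)
theta = np.zeros(4)
for step in range(200):
    theta += 0.05 * spsa_gradient(noisy_gain, theta)

print(theta)   # should approach [1, 1, 1, 1] despite the noise

Averaging several perturbations per step trades extra gain-function evaluations for a lower-variance gradient estimate – a trade-off that matters when the noise from the environment is large.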
Implementation: the dual platform is based both on my own independent research and on "traditional" ML/deep networks (gradient descent, Python, TensorFlow). The code is not public at present.
Technologies used: Originally prototyped in Mathematica, then developed in C#/.NET, with a WPF UI. Most recently ported to Python/TensorFlow/Flask, with a browser-based UI.