7 Current Trends in Artificial Intelligence | GoFounders

Several new trends in AI are picking up pace in the modern world. We will discuss the latest AI trends here in this blog.

1. Reinforcement Learning and its real-world applications

Reinforcement Learning is a relatively new field in AI that is shaping up to be the next big thing. Whenever an AI agent is deployed in the real world, it needs to explore its environment while obeying the constraints of that environment. Research groups have proposed methods such as Constrained Policy Optimization (CPO), which ensures safety during exploration.
These AI agents can also be trained with human feedback. MacGlashan et al. have proposed an algorithm called Convergent Actor-Critic by Humans (COACH), which learns from policy-dependent feedback provided by non-technical users. COACH can learn various behaviors on a physical robot.
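A minimal sketch of the COACH idea, interpreting human feedback as the advantage term in a policy-gradient update. Everything here is an illustrative assumption: a toy single-state task with two actions, and a hypothetical `human_feedback` function standing in for the human trainer.

```python
import math
import random

random.seed(0)

ACTIONS = ["left", "right"]
theta = {a: 0.0 for a in ACTIONS}          # softmax policy parameters

def policy():
    # Softmax over action preferences
    z = {a: math.exp(theta[a]) for a in ACTIONS}
    s = sum(z.values())
    return {a: z[a] / s for a in ACTIONS}

def human_feedback(action):
    # Stand-in for a human trainer who prefers "right"
    return 1.0 if action == "right" else -1.0

alpha = 0.5
for _ in range(200):
    probs = policy()
    a = random.choices(ACTIONS, weights=[probs[x] for x in ACTIONS])[0]
    f = human_feedback(a)
    # Actor update: feedback plays the role of the advantage
    # in the policy-gradient step f * grad log pi(a)
    for b in ACTIONS:
        grad = (1.0 if b == a else 0.0) - probs[b]
        theta[b] += alpha * f * grad

print(policy()["right"])   # probability of the preferred action after training
```

After a couple of hundred feedback signals, the policy concentrates almost all probability on the action the (simulated) trainer rewards.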

2. Deep Learning Optimization

Methods such as batch normalization and whitening neural networks (WNN) are used to regularize deep neural networks. However, the computational overhead of building the covariance matrix and solving its SVD makes whitening a bottleneck to apply. A new method called Generalized Whitening Neural Networks (GWNN) overcomes the limitations of WNN by reducing the computational overhead and learning compact representations.
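The source of that overhead is easy to see in a toy sketch: batch normalization only standardizes each feature independently, while whitening additionally decorrelates the features, which requires building the covariance matrix and decomposing it. This is a hypothetical NumPy illustration of the two operations, not the GWNN method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated "activations": 512 samples, 4 features
X = rng.normal(size=(512, 4)) @ rng.normal(size=(4, 4))

# Batch normalization: zero mean, unit variance per feature (cheap, diagonal)
bn = (X - X.mean(0)) / X.std(0)

# Whitening: also removes cross-feature correlation, which needs the
# full covariance matrix and its eigendecomposition (the expensive part)
Xc = X - X.mean(0)
cov = Xc.T @ Xc / len(Xc)
w, V = np.linalg.eigh(cov)
white = Xc @ V @ np.diag(w ** -0.5) @ V.T   # ZCA whitening

# The whitened covariance is the identity matrix
print(np.allclose(np.cov(white, rowvar=False, bias=True), np.eye(4), atol=1e-6))
```

Batch normalization touches each feature in isolation; whitening pays for the covariance matrix and the decomposition, which is exactly the cost GWNN aims to cut.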
An AI research institute proposed a Winograd-style fast convolution for higher dimensions, optimized for CPUs. The algorithm was benchmarked against popular frameworks like Caffe and TensorFlow, which support the AVX and Intel MKL optimized libraries. An interesting insight that emerged is that current CPU performance limitations are largely due to software rather than hardware.
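Winograd-style algorithms trade multiplications for additions. A minimal NumPy sketch of the classic 1-D case F(2, 3), which produces two outputs of a 3-tap filter with four multiplications instead of six; the transform matrices are the standard published ones, while the data and filter values are arbitrary.

```python
import numpy as np

# Standard Winograd F(2, 3) transform matrices
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])                # filter transform
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)   # output transform

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile of 4 values
g = np.array([0.5, 1.0, -1.0])       # 3-tap filter

# Elementwise product in the transformed domain: only 4 multiplications
winograd = AT @ ((G @ g) * (BT @ d))

# Direct sliding-window computation: 6 multiplications
direct = np.array([d[i:i + 3] @ g for i in range(2)])

print(np.allclose(winograd, direct))
```

The filter transform `G @ g` can be precomputed once per filter, so in a real convolution the saving compounds across every output tile.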
As the number of feature maps increases, redundancy increases, leading to inefficient memory usage. A method called RedCNN has been proposed to reduce the dimensionality of feature maps while preserving their intrinsic information and reducing the correlation between them. It uses a circulant matrix for projection, which gives high training and mapping speed.
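The appeal of a circulant projection is that multiplying by a circulant matrix reduces to an FFT, which is where the high training and mapping speed comes from. A small NumPy sketch of that identity follows; this is not RedCNN itself, just the circulant-times-vector trick it relies on.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.normal(size=n)       # defining vector of the circulant matrix

# Explicit circulant matrix: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = rng.normal(size=n)       # feature vector to project
direct = C @ x               # O(n^2) matrix-vector product

# FFT trick: circulant multiplication is elementwise in the Fourier
# domain, so the same projection costs O(n log n)
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

print(np.allclose(direct, fast))
```

A circulant projection also needs only the n-element vector `c` instead of an n-by-n weight matrix, which is what makes the memory footprint compact.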

3. Deep Learning Application

In the field of healthcare, sleep disorders can be diagnosed by identifying sleep patterns, enabling better care. Currently, identifying sleep patterns is itself cumbersome: many sensors are attached to the body, making it harder for the patient to sleep and rendering the measurement unreliable. To overcome these challenges, a team from MIT researched using wireless radio frequency (RF) signals to identify sleep patterns without placing sensors on the patient's body.
A combination of CNN and RNN was used to identify patterns for sleep stage prediction. However, the RF signals contained a lot of unwanted noise, so the team added adversarial training that discards extraneous information specific to any individual while retaining the information required to predict the sleep stage. The team achieved significantly better results (80% accuracy) than the current method.
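The overall shape of such a CNN-RNN pipeline can be sketched with a 1-D convolution as the feature extractor and a plain tanh recurrence as the sequence model. All shapes, weights, and the four-stage output here are illustrative assumptions, not the MIT team's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one window of RF signal: 100 time steps
signal = rng.normal(size=100)

# CNN stage: a 1-D convolution plus ReLU extracts local features
kernel = rng.normal(size=5)
features = np.convolve(signal, kernel, mode="valid")   # length 96
features = np.maximum(features, 0.0)                   # ReLU

# RNN stage: a plain tanh recurrence summarizes the feature sequence
Wh, Wx = 0.5, rng.normal()
h = 0.0
for x in features:
    h = np.tanh(Wh * h + Wx * x)

# Classifier head: map the final hidden state to per-stage scores
# (assumption: four sleep stages)
W_out = rng.normal(size=4)
scores = W_out * h
predicted_stage = int(np.argmax(scores))
print(predicted_stage)
```

In the real system the adversarial component would sit between the encoder and the classifier, penalizing any representation from which the individual's identity can be recovered; that part is omitted here.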

4. Meta-Learning

A method called Model-Agnostic Meta-Learning (MAML) creates a model with parameters learned from random sampling over a distribution of tasks. The model can then be adapted to new tasks with only a few training examples and interactions, an approach also called few-shot learning. Researchers demonstrated MAML's application to classification, regression, and reinforcement learning tasks.
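The inner/outer-loop structure of MAML can be sketched on a deliberately tiny problem: assume each task is minimizing (θ − t)² for a task-specific target t, so the inner gradient step and the meta-gradient can be written out by hand. The learning rates and task distribution are illustrative assumptions.

```python
import random

random.seed(0)
alpha, beta = 0.1, 0.05   # inner / outer learning rates (assumed values)
theta = 0.0               # meta-parameter shared across tasks

def sample_task():
    # Each task: minimize (theta - t)^2 for a task-specific target t
    return random.uniform(-2.0, 2.0)

for _ in range(500):
    meta_grad = 0.0
    for _ in range(5):   # a batch of sampled tasks
        t = sample_task()
        # Inner loop: one gradient step adapts theta to this task
        theta_prime = theta - alpha * 2 * (theta - t)
        # Outer loop: gradient of the post-adaptation loss
        # (theta_prime - t)^2 with respect to the original theta,
        # differentiating through the inner step
        meta_grad += 2 * (theta_prime - t) * (1 - 2 * alpha)
    theta -= beta * meta_grad / 5

print(theta)   # settles near 0, the mean of the task targets
```

The meta-parameter converges to the point from which a single inner gradient step does best on average across tasks; here that is the center of the task distribution, which is the essence of learning a good initialization.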

5. Sequential Modeling

Many sequences, such as phrases in human language or groups of letters following phonotactic rules, have a natural segmental structure. Facebook AI Research (FAIR) uses convolutions for sequence-to-sequence learning, creating hierarchical structures with multi-layer convolutions. In this way, they replicated the long-range dependencies captured by traditional LSTM-based architectures, apart from using gated linear units, residual connections, and attention in every decoder layer.
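Of those components, the gated linear unit is the simplest to show: it splits the channels in half and uses one half, passed through a sigmoid, to gate the other. A minimal NumPy sketch with assumed input values:

```python
import numpy as np

def glu(x):
    # Split the last dimension in half: one half carries values,
    # the sigmoid of the other half gates them
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))   # a * sigmoid(b)

x = np.array([[1.0, -2.0, 0.0, 100.0]])   # values [1, -2], gates [0, 100]
out = glu(x)
print(out.shape)   # (1, 2): the gate halves the channel dimension
print(out)         # [[0.5, -2.0]]: sigmoid(0)=0.5, sigmoid(100)~1
```

Because the gate is linear in half its inputs, gradients flow through the ungated path without the saturation of a tanh, which is part of why the convolutional decoder trains well at depth.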
