Deep Learning Methods on IoT: A Survey of the State of the Art

ABSTRACT: Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans and helps to make sense of data such as images, sound, and text. In this paper, we provide an overview of an advanced machine learning technique, namely Deep Learning (DL), to facilitate analytics and learning in the IoT domain; IoT applications that have incorporated DL in their intelligence background are also discussed.
These methods have dramatically improved the state of the art in computer vision, speech recognition, natural language processing (NLP), and many other domains such as drug discovery and cancer cell detection. We summarize major reported research attempts that leverage deep learning in the IoT domain.

General Terms: deep learning

Keywords: deep learning, collaborative filtering, hybrid recommender, Internet of Things.

Introduction: The vision of the Internet of Things (IoT) is to transform traditional objects into smart ones by exploiting a wide range of advanced technologies, from embedded devices and communication technologies to Internet protocols, data analytics, and so forth. In recent years, many IoT applications have arisen in different vertical domains, for example: health, transportation, smart home, smart city, agriculture, education, etc. Deep Learning (DL) has been actively utilized in many IoT applications in recent years.
Applications:

Automated Driving: Automotive researchers are using deep learning to automatically detect objects such as stop signs, traffic lights, and even pedestrians to decrease accidents.

Aerospace and Defense: Deep learning is used to identify objects from satellites that locate areas of interest, and to identify safe or unsafe zones for troops.

Medical Research: Cancer researchers are using deep learning to automatically detect cancer cells.

Industrial Automation: Deep learning protects workers by automatically detecting when people or objects are within an unsafe distance of machines.
Electronics: Deep learning is being used in automated hearing and speech translation.

DEEP LEARNING APPROACHES FOR RECOMMENDER SYSTEMS

1) Deep Neural Networks (DNN): A deep neural network (DNN) is a multilayer perceptron network with many hidden layers, whose weights are fully connected and are often initialized using stacked RBMs or a DBN [31, 32]. The success of the DNN lies in its ability to accommodate a larger number of hidden units and in better parameter initialization methods. A DNN with a large number of hidden units can have better modeling power.
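As a concrete illustration (not taken from the survey), the forward pass of such a fully connected network can be sketched in NumPy; the layer sizes and the ReLU activation below are arbitrary assumptions for the sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dnn_forward(x, weights, biases):
    """Forward pass through a fully connected deep network.

    Each hidden layer applies an affine transform followed by ReLU;
    the final layer is left linear (e.g. for scores or regression).
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# A toy DNN with three hidden layers: 4 -> 8 -> 8 -> 8 -> 2
sizes = [4, 8, 8, 8, 2]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = dnn_forward(rng.normal(size=(5, 4)), weights, biases)
print(out.shape)  # (5, 2): five samples, two outputs each
```

In practice the weights would be learned (or, as noted above, pre-initialized with stacked RBMs or a DBN) rather than drawn at random.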
1.1 Basic terminologies of deep learning

1) Deep belief network (DBN): A generative model composed of multiple layers of stochastic, hidden variables. The top two layers have undirected, symmetric connections between them; the lower layers receive top-down, directed connections from the layer above.

2) Boltzmann machine (BM): A network of symmetrically connected, neuron-like units that make stochastic decisions about whether to be on or off.

3) Restricted Boltzmann machine (RBM): A special type of Boltzmann machine consisting of a layer of visible units and a layer of hidden units, with no visible-visible or hidden-hidden connections.

4) Deep Boltzmann machine (DBM): A special type of BM in which the hidden units are organized in a deep, layered manner; only adjacent layers are connected, and there are no visible-visible or hidden-hidden connections within the same layer.

5) Deep neural network (DNN): A multilayer network with many hidden layers, whose weights are fully connected and are often initialized using stacked RBMs or a DBN.

6) Deep auto-encoder: A DNN whose output targets the data input itself, often pre-trained with a DBN or using distorted training data to regularize the learning.

7) Distributed representation: A representation in which the observed data are modeled as being generated by the interactions of many hidden factors. A particular factor learned from one configuration can often generalize well to others. Distributed representations form the basis of deep learning.

2) NEURAL NETWORKS: Most deep learning methods use neural network architectures. The term "deep" usually refers to the number of hidden layers in the neural network.
Traditional neural networks contain only 2-3 hidden layers, while deep networks can have as many as 150 layers.

2.2 ARTIFICIAL NEURAL NETWORK: The original goal of the ANN was to solve problems in the same way that a human brain would. ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

3) Convolutional Neural Network (CNN): A Convolutional Neural Network is a type of deep learning model in which each module consists of a convolutional layer and a pooling layer. These modules are often stacked one on top of another, or with a DNN on top, to form a deep model.
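Concretely, the spatial size of a convolutional layer's output volume is governed by the filter size, stride, and zero-padding. A small sketch using the standard output-size formula (W - F + 2P) / S + 1 (the concrete numbers below are illustrative, not from the text):

```python
def conv_output_size(width, filter_size, stride, zero_padding):
    """Spatial output size of a convolutional layer along one dimension,
    per the standard formula (W - F + 2P) / S + 1."""
    return (width - filter_size + 2 * zero_padding) // stride + 1

# A 32x32x3 input with a 5x5 filter, stride 1 and padding 2
# keeps the spatial size at 32; the depth of the output volume
# equals the number of filters used.
print(conv_output_size(32, 5, 1, 2))   # 32
print(conv_output_size(32, 5, 2, 0))   # 14, i.e. (32 - 5) // 2 + 1
```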
A CNN is very similar to an ordinary neural network. It is made of neurons with learnable weights and biases, where each neuron receives some inputs, performs a dot product over them, and optionally follows it with a non-linearity. In this architecture, each neuron transforms the inputs it receives through a series of hidden layers. Each hidden layer consists of neurons that are fully connected to all the neurons of the previous layer; the neurons within a single layer function independently and do not share connections with one another. The final fully connected layer is the "output layer," and in a classification system it represents the class scores. For example, a single fully connected neuron in the first hidden layer of a regular neural network processing a 32x32x3 image would have 32*32*3 = 3072 weights. There are three main parameters that control the output volume of the convolution layer. They are:
1. Depth
2. Stride
3. Zero-padding

The main advantage of convolutional neural networks is that the inputs are represented in an image format, which makes them a more natural way of applying neural networks to visual data. Applications of convolutional neural networks include:

1. Image recognition
2.
Video analysis
3. Checkers
4. Go
5. Fine-tuning

Figure 1.1: Architecture of a CNN.

4) Recurrent Neural Networks (RNNs): The input to an RNN consists of both the current sample and the previously observed sample, and the output of an RNN at time step t-1 affects the output at time step t. Each neuron is equipped with a feedback loop that returns the current output as an input for the next step.

5) Long Short-Term Memory (LSTM): LSTM is an extension of RNNs. It uses the concept of gates for its units, each of which computes a value between 0 and 1 based on its input.
The feedback loop also stores information, and each neuron in an LSTM (also called a memory cell) has a multiplicative forget gate, read gate, and write gate. These gates are introduced to control access to the memory cells and to prevent them from being perturbed by irrelevant inputs. When the forget gate is active, the neuron writes its data into itself.
When the forget gate is turned off by sending it a 0, the neuron forgets its last content. When the write gate is set to 1, other connected neurons can write to that neuron. If the read gate is set to 1, the connected neurons can read the content of the neuron.

6) Autoencoders (AEs): AEs consist of an input layer and an output layer that are connected through one or more hidden layers. AEs have the same number of input and output units.

7) Variational Autoencoders (VAEs)

8) Generative Adversarial Networks (GANs)

9) Ladder Networks: Ladder networks were proposed in 2015 by Valpola et al.
[30] to support unsupervised learning. Ladder networks perform a variety of tasks, such as handwritten digit recognition and image classification. The architecture of a ladder network consists of two encoders and one decoder.
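A single time step of the LSTM gating described above can be sketched in NumPy. The weight layout, sizes, and random initialization here are illustrative assumptions, not the survey's formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: the forget, write (input) and read (output)
    gates each compute a value between 0 and 1 that controls access
    to the memory cell c."""
    z = np.concatenate([x, h_prev]) @ W + b
    H = h_prev.size
    f = sigmoid(z[0:H])          # forget gate: keep or drop old content
    i = sigmoid(z[H:2*H])        # write gate: admit new content
    o = sigmoid(z[2*H:3*H])      # read gate: expose the cell content
    g = np.tanh(z[3*H:4*H])      # candidate content to be written
    c = f * c_prev + i * g       # gated update of the memory cell
    h = o * np.tanh(c)           # gated read-out of the memory cell
    return h, c

rng = np.random.default_rng(0)
X, H = 3, 4                      # input and hidden sizes (arbitrary)
W = rng.normal(scale=0.1, size=(X + H, 4 * H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=X), np.zeros(H), np.zeros(H), W, b)
print(h.shape, c.shape)          # (4,) (4,)
```

The multiplicative role of the gates is visible in the update line: a forget gate near 0 erases the old cell content, while a write gate near 1 lets new content in.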
Fig. 2.1. Ladder network structure with two layers.
Fig. 3.1. Structure of a recurrent neural network.

10) Architectures of Deep Learning

10.1 Generative deep architectures: intended to characterize the high-order correlation properties of the observed or visible data for pattern analysis or synthesis purposes.
10.2 Discriminative deep architectures: intended to directly provide discriminative power for pattern classification.

10.3 Hybrid deep architectures: where the goal is discrimination, but it is assisted (often significantly) by the outcomes of generative architectures via better optimization and/or regularization.

Fig. 4.1. Google Trends showing increased attention toward deep learning in recent years.
Fig. 5.1. The overall mechanism of training a DL model.

Islanding Detection Methods

This section provides an overview of various islanding detection methods. There are three major categories of islanding detection methods: passive resident methods, active resident methods, and communication-based methods.

Passive Resident Methods

Passive resident methods are based on the detection of abnormalities in the electrical signals at the PCC (point of common coupling) of a DG (distributed generation) unit.
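As a hedged illustration of such a passive method, an over/under-frequency and over/under-voltage check at the PCC might look like the sketch below. The threshold values and function name are invented for the example, not taken from the text:

```python
def passive_islanding_check(frequency_hz, voltage_pu,
                            f_limits=(59.3, 60.5), v_limits=(0.88, 1.10)):
    """Flag a possible islanding condition when the PCC frequency or
    per-unit voltage drifts outside its normal operating window.
    The limits used here are illustrative, not from the survey."""
    f_ok = f_limits[0] <= frequency_hz <= f_limits[1]
    v_ok = v_limits[0] <= voltage_pu <= v_limits[1]
    return not (f_ok and v_ok)

print(passive_islanding_check(60.0, 1.00))  # False: normal operation
print(passive_islanding_check(58.7, 1.00))  # True: under-frequency anomaly
```

Real passive schemes monitor richer signatures (rate of change of frequency, phase jumps, harmonics), but the principle is the same: detect an abnormality in the locally measured signals without injecting any disturbance.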
Active Resident Methods

An active resident method artificially creates abnormalities in the PCC signals that can be detected subsequent to an islanding event.

Communication-Based Methods

Communication-based methods are based on the transmission of data between a DG unit and the host utility system. The data is analyzed by the DG unit to determine whether the operation of the DG should be halted.

Global Strategies

Deep learning provides two main improvements over traditional machine learning approaches.
They are:

1. They reduce the need for hand-crafted, engineered feature sets to be used exclusively for training purposes.
2. They increase the accuracy of the prediction model for larger amounts of data.

Conclusion: