What does AI code look like?

Mikkel Duif

Mikkel Duif, AI enthusiast.

Updated 44w ago

It doesn’t look too exciting! But what does it look like when you execute it?

One has to remember that AI is mostly just a bunch of mathematics (especially when talking about machine learning and deep learning). Coding just makes it possible for the computer to execute it and learn from the data. I think it is not so much about what the code looks like or how to write it, but more about understanding what is actually going on in the code!

How can we create something that will recognise these handwritten digits from the MNIST database?

[Image: sample handwritten digits from the MNIST database]

Look at this. Just some Python code, but with a lot of mathematical definitions.1

[Image: a screenful of the Python code for the neural network]

Excited?

I even have a hard time understanding it myself.

So what does it do? A lot of math!

Each handwritten digit consists of 28x28 pixels, which gives a total of 784 pixels per digit. Every pixel is used as an input to something called a neural network, which we then train on the data. Each pixel goes into the input layer and passes through a hidden layer, which in the image below is set to 15 neurons. From there, we get an estimate of which digit it is most likely to be, i.e. we look at which digit in the output layer has the highest activation. To visualise it, it looks like this:2

[Image: diagram of a neural network with 784 input neurons, a 15-neuron hidden layer and 10 output neurons]

So what happens when we execute the code? The type of network we have set up is called a feedforward neural network, which means that all the data flows in one direction. We use the backpropagation algorithm to calculate the error on the current data, adjust the weights and biases accordingly, and run through the neural network once again.

We get this output:

>>> import mnist_loader
>>> training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
>>> import network
>>> net = network.Network([784, 30, 10])
>>> net.SGD(training_data, 10, 10, 0.1, test_data=test_data)
Epoch 0: 5105 / 10000
Epoch 1: 5887 / 10000
Epoch 2: 7147 / 10000
Epoch 3: 7566 / 10000
Epoch 4: 7763 / 10000
Epoch 5: 7869 / 10000
Epoch 6: 7948 / 10000
Epoch 7: 8019 / 10000
Epoch 8: 8073 / 10000
Epoch 9: 8111 / 10000

We see that in the very first round, our neural network was capable of correctly classifying 51% of all digits, and by the end we had trained it to correctly classify 81% of all handwritten digits. Quite a good improvement, right? Randomly guessing the digits would give you an accuracy of just 10%! The accuracy can be improved even further, it just requires a lot of computational power. Moreover, changes to the code can also improve the learning speed. You will probably be able to get an error rate of less than 4% with a normal computer, while the best results have an error rate of less than 1%.3

So what did we do here?

>>> net = network.Network([784, 30, 10])

In this line of code we tell the network that there are (see the constructor sketch just after this list):

  1. 784 input neurons
  2. 30 neurons in a single hidden layer
  3. 10 output neurons
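For reference, here is roughly what that constructor does in Nielsen's network.py: each entry of the list becomes a layer size, and the weights and biases between consecutive layers are initialised with random Gaussians. A minimal sketch:

import numpy as np

class Network(object):
    def __init__(self, sizes):
        # sizes = [784, 30, 10]: input, hidden and output layer sizes
        self.num_layers = len(sizes)
        self.sizes = sizes
        # one bias vector per non-input layer...
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # ...and one weight matrix per pair of adjacent layers
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]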

In this line of code we use a method called stochastic gradient descent to train our network and ‘decrease our costs’ (the arguments are mapped out below the list):

>>> net.SGD(training_data, 10, 10, 0.1, test_data=test_data)

  1. We go through 10 epochs, i.e. 10 complete passes through the training data, improving the network on each pass
  2. We want to have a mini-batch size of 10
  3. Our learning rate is 0.1
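For clarity, this is how the positional arguments line up with the signature of the SGD method in Nielsen's network.py (the comments are mine):

# def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None)
net.SGD(training_data,
        10,                    # epochs: full passes through the training data
        10,                    # mini_batch_size: examples per gradient estimate
        0.1,                   # eta: the learning rate
        test_data=test_data)   # if given, evaluate after every epoch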

This is probably where it becomes a bit harder to grasp. But just imagine the mini-batch size as the number of training examples we sample for each update, while the learning rate is how fast we progress with our learning. So why don’t we set our learning rate to be very high? Because then it is likely that our network will never get better, as we take steps that are too big. Imagine we want to take steps so as to move this ball to its lowest position:

[Image: a ball on a curved cost surface, rolling down toward the minimum]

If we take too big a step each time, we will just end up on the other side of the valley, and never move down to the minimum (I’ve explained this more in depth here: What is the meaning of the learning rate in neural networks?).
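Here is a tiny numerical illustration of that overshooting (my own toy example, not from the original answer): minimising f(w) = w² with plain gradient descent. A small learning rate creeps toward the minimum at 0; a learning rate above 1 jumps across it and diverges.

def minimize(eta, steps=10, w=1.0):
    for _ in range(steps):
        w = w - eta * 2 * w   # the gradient of w**2 is 2*w
    return w

print(minimize(0.1))   # ~0.11: steadily approaching the minimum at 0
print(minimize(1.1))   # ~6.19: each step lands farther away, so it diverges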

So how can you play around with AI code yourself?

  1. Go to GitHub and download the above code: mnielsen/neural-networks-and-deep-learning
  2. Start reading about AI from this free online book: Neural networks and deep learning
  3. Open your terminal, and type in the code from above, and you will see your network starts to learn! Try different settings of the epochs, mini-batch size and learning rate.

This answer is to a great extent based on these two sources, and I highly recommend starting to learn from them. Both of the links can be read and downloaded for free. The author accepts donations of an arbitrary amount, but you are free to decide for yourself (I do not know the author, but I highly appreciate his material, and hope you will too).

Remember that this is just one area of the vast field of AI. Many approaches exist for recognising these digits; I have just used one! Moreover, as you probably know already, AI is not just about recognising handwritten digits, but can be applied in a lot of other areas as well!


Other learning resources:

  1. Machine Learning by Andrew Ng: Machine Learning | Coursera

It definitely looks tough, but with basic math and programming skills (or willingness to learn them) you and your neural network will be learning quickly!

Footnotes

1 mnielsen/neural-networks-and-deep-learning

2 Neural networks and deep learning

3 MNIST handwritten digit database, Yann LeCun, Corinna Cortes and Chris Burges

Sriraman Madhavan

Sriraman Madhavan, Stanford Statistics | Facebook Engineer

Answered 63w ago · Upvoted by

Ashutosh Kakadiya, M.S. Artificial Intelligence & Computer Science, Indian Institute of Technology, Madras (2021) and

Uday Santosh Kumar Thathapudi, M.C.A Computer Applications & Computer Programming, Visakhapatnam, Andhra Pradesh, India (2009) · Author has 258 answers and 7.2m answer views

This one tries to predict which Indian state you’re from, based on your name. Given that there are several character-level patterns in Indian names which may identify the person’s home state, I was surprised that this hasn’t been done before (at least publicly). I’m still working on it, but here’s a snippet:

import numpy as np
import tensorflow as tf

# split: first two thirds for training, the rest for testing
# (integer division so the slice indices are ints)
size = len(names)
train_X = np.array(names[:size * 2 // 3])
train_y = np.array(indStates[:size * 2 // 3])
test_X = np.array(names[size * 2 // 3:])
test_y = np.array(indStates[size * 2 // 3:])

# placeholders for the encoded character sequences and the state labels
X = tf.placeholder(tf.float32, [None, max_sequence_length, num_input])
y = tf.placeholder(tf.float32, [None, num_classes])
weights = weight_variable([num_hidden, num_classes])
biases = bias_variable([num_classes])

# a basic RNN reads the name character by character
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_hidden)
outputs, states = tf.nn.dynamic_rnn(rnn_cell, X, dtype=tf.float32)

# classify from the RNN output at the last character
y_ = tf.matmul(outputs[:, -1, :], weights) + biases
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)


Here’s one that isn’t mine.

This one is Google’s image captioning model. A snippet and a few examples of generated captions are below, and the whole codebase is available here.

[Image: a snippet of the image captioning model’s code]

[Image: example images with captions generated by the model]

But wait! In case you were too impressed, here are some epic fails by the same model (as shown in https://arxiv.org/pdf/1609.06647...):

[Image: example images with badly wrong generated captions]

Artificial Intelligence will probably end up being as stupid as the rest of us.

Saurabh Pandey

Saurabh Pandey, metadata is the key to AI, statistics Nazi, quantum qubits addict

Answered 64w ago · Author has 165 answers and 196.8k answer views

As others have said, AI code (weak AI / machine learning / NLP etc.) will disappoint you for sure. Hollywood movies make it appear cool and very complicated, but honestly it's just a few lines which basically perform some mathematical operations and train the classifier. Here the game is all about hyperparameters, data types, shapes, dimensions etc. The implementations of libraries like Keras and TensorFlow are quite interesting, but ML scripts themselves are generally not that fascinating.
Examples
1. Multiclass text classifier

import numpy as np
import tensorflow as tf

class TextCNNRNN(object):
    def __init__(self, embedding_mat, non_static, hidden_unit, sequence_length, max_pool_size,
                 num_classes, embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):

        self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name='input_x')
        self.input_y = tf.placeholder(tf.float32, [None, num_classes], name='input_y')
        self.dropout_keep_prob = tf.placeholder(tf.float32, name='dropout_keep_prob')
        self.batch_size = tf.placeholder(tf.int32, [])
        self.pad = tf.placeholder(tf.float32, [None, 1, embedding_size, 1], name='pad')
        self.real_len = tf.placeholder(tf.int32, [None], name='real_len')

        l2_loss = tf.constant(0.0)

        with tf.device('/cpu:0'), tf.name_scope('embedding'):
            if not non_static:
                W = tf.constant(embedding_mat, name='W')
            else:
                W = tf.Variable(embedding_mat, name='W')
            self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)
            emb = tf.expand_dims(self.embedded_chars, -1)

        pooled_concat = []
        reduced = np.int32(np.ceil(sequence_length * 1.0 / max_pool_size))

        for i, filter_size in enumerate(filter_sizes):
            with tf.name_scope('conv-maxpool-%s' % filter_size):
                # Zero paddings so that the convolution output has dimension
                # batch x sequence_length x emb_size x channel
                num_prio = (filter_size - 1) // 2
                num_post = (filter_size - 1) - num_prio
                pad_prio = tf.concat([self.pad] * num_prio, 1)
                pad_post = tf.concat([self.pad] * num_post, 1)
                emb_pad = tf.concat([pad_prio, emb, pad_post], 1)

                filter_shape = [filter_size, embedding_size, 1, num_filters]
                W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name='W')
                b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name='b')
                conv = tf.nn.conv2d(emb_pad, W, strides=[1, 1, 1, 1], padding='VALID', name='conv')
                h = tf.nn.relu(tf.nn.bias_add(conv, b), name='relu')

                # Max-pooling over the outputs
                pooled = tf.nn.max_pool(h, ksize=[1, max_pool_size, 1, 1],
                                        strides=[1, max_pool_size, 1, 1],
                                        padding='SAME', name='pool')
                pooled = tf.reshape(pooled, [-1, reduced, num_filters])
                pooled_concat.append(pooled)

        pooled_concat = tf.concat(pooled_concat, 2)
        pooled_concat = tf.nn.dropout(pooled_concat, self.dropout_keep_prob)

        # A GRU cell (an LSTM cell would work here too)
        lstm_cell = tf.contrib.rnn.GRUCell(num_units=hidden_unit)
        lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=self.dropout_keep_prob)

        self._initial_state = lstm_cell.zero_state(self.batch_size, tf.float32)
        inputs = [tf.squeeze(input_, [1])
                  for input_ in tf.split(pooled_concat, num_or_size_splits=int(reduced), axis=1)]
        outputs, state = tf.contrib.rnn.static_rnn(lstm_cell, inputs,
                                                   initial_state=self._initial_state,
                                                   sequence_length=self.real_len)

        # Collect the appropriate last words into variable output (dimension = batch x embedding_size)
        output = outputs[0]
        with tf.variable_scope('Output'):
            tf.get_variable_scope().reuse_variables()
            one = tf.ones([1, hidden_unit], tf.float32)
            for i in range(1, len(outputs)):
                ind = self.real_len < (i + 1)
                ind = tf.to_float(ind)
                ind = tf.expand_dims(ind, -1)
                mat = tf.matmul(ind, one)
                output = tf.add(tf.multiply(output, mat), tf.multiply(outputs[i], 1.0 - mat))

        with tf.name_scope('output'):
            self.W = tf.Variable(tf.truncated_normal([hidden_unit, num_classes], stddev=0.1), name='W')
            b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name='b')
            l2_loss += tf.nn.l2_loss(W)
            l2_loss += tf.nn.l2_loss(b)
            self.scores = tf.nn.xw_plus_b(output, self.W, b, name='scores')
            self.predictions = tf.argmax(self.scores, 1, name='predictions')

        with tf.name_scope('loss'):
            # only named arguments accepted
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.scores)
            self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

        with tf.name_scope('accuracy'):
            correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
            self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, 'float'), name='accuracy')

        with tf.name_scope('num_correct'):
            correct = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
            self.num_correct = tf.reduce_sum(tf.cast(correct, 'float'))

2. Sentiment analysis

# Note: this is Python 2 code (print statements and old except syntax).
import requests
import random
import nltk
from nltk.corpus import stopwords, wordnet
import re

english_stops = set(stopwords.words('english'))
english_stops.add('.')
english_stops.add('also')
english_stops.add('get')

def feature_extractor(document):
    features = {}
    ls = []
    for document_words in document:
        for i, word in enumerate(document_words):
            if ((word.lower() not in english_stops and word.isalpha())
                    or word.lower() == 'not'):
                syn = wordnet.synsets(word.lower())
                v = 'no'
                n = 'no'
                r = 'no'
                a = 'no'
                identity = 'unknown'
                if len(syn) == 0:
                    identity = 'unknown'
                    continue
                else:
                    identity = 'known'
                word = word.lower()
                features['contains(%s) with identity %s' % (word.lower(), identity)] = True
    return features

def pre_train_processor(sent):
    rd = re.sub(r'(^@| @)[^ ]+', r"", sent)
    rd = re.sub(r'(^http| http)[^ ]+', r"", rd)
    rd = re.sub(r'(\.)*\1', r".", rd)
    rep = re.findall(r'(\w).*\1.*\1', rd)
    for ch in rep:
        rd = re.sub(r'' + ch + ch + ch + r'+', r'' + ch + ch, rd)
    sentences = nltk.sent_tokenize(rd)
    sentences = [nltk.word_tokenize(sent) for sent in sentences]
    return sentences

def train_classifier():
    l = ['1006589563512903396', '1008324413029484000', '1040575996755346578',
         '1045137713186360887', '7022552177697192106', '674420826054740334',
         '698451933787252187', '414302334487557591', '8944057146243258823',
         '6946341970593481260', '1431312376514000040', '7864797704321962005',
         '7070434591968454556']
    app_id = '8bf9db83'
    app_key = '428d906f5a33883c5066ddcfb5704f39'
    final_resp = []
    featuresets = []
    url = "http://developer.goibibo.com/api/voyager/get_hotels_by_cityid/?app_id=8bf9db83&app_key=428d906f5a33883c5066ddcfb5704f39&city_id=6771549831164675055"
    res = requests.get(url)
    result = res.json()
    print len(result['data'].keys())
    count = 0
    for key in result['data'].keys():
        count += 1
        print count
        vid = result['data'][key]['hotel_geo_node']['_id']
        # vid = data.get('vid', '7022552177697192106')
        limit = "50"
        url = "http://ugc.goibibo.com/api/HotelReviews/forWeb?app_id=" + app_id + "&app_key=" + app_key + "&vid=" + vid + "&limit=" + limit + "&offset=0"
        res = requests.get(url)
        reslt = res.json()
        try:
            for review in reslt:
                if review.get('reviewContent') and len(review['reviewContent'].strip()) > 10:
                    final_resp.append({'rating': review['totalRating'], 'text': review['reviewContent']})
        except Exception, e:
            print str(e)
    for x in final_resp:
        data = feature_extractor(pre_train_processor(x['text']))
        if x['rating'] > 3:
            tag = 'good'
        elif x['rating'] == 3:
            tag = 'moderate'
        else:
            tag = 'bad'
        featuresets.append((data, tag))
    print "length of data set: " + str(len(featuresets))
    feat_len = int(0.8 * len(featuresets))
    train_set, test_set = featuresets[:feat_len], featuresets[feat_len:]
    classifier = nltk.NaiveBayesClassifier.train(train_set)
    print nltk.classify.accuracy(classifier, test_set)
    return classifier

def check_pos_hotels(classifier, data):
    sentences = pre_train_processor(data)
    feat = feature_extractor(sentences)
    res = {}
    res['features'] = feat
    res['tag'] = classifier.classify(feat)
    return res

Pasting the code did not turn out the way it should, so pardon me for that. For more answers in this field, you can follow me on Quora (totally optional :P).

Let me know if you need help.
Kudos!! :)


Mohit Mishra

Mohit Mishra, I am also a Programmer

Updated 20w ago

I’m using Python 3 to develop the AI. Don’t worry, all the final code is on my GitHub Repo so you can easily copy-paste the code.

Nowadays, it’s important to have a catchy name for your AI. Apple started with Siri, Amazon came up with Alexa, so I’m gonna call mine Sirlexa. Go ahead and think of a fancy name for your AI.

Next, open up your code editor. I’m using Sublime Text 3. Make a new Python file called “sirlexa.py” on your desktop. If you’re using macOS, here are the terminal commands:

cd Desktop
touch sirlexa.py

Open sirlexa.py with your code editor and write the following:

[Image: the first lines of sirlexa.py, importing random and defining a list of responses]

In the first line, we import the standard random module that we’re going to need later. responses is a list with 3 different sentences; in case Sirlexa doesn’t understand, one of those sentences will be printed out in the console.

[Image: the main while loop of sirlexa.py]

The most complicated chunk of code in sirlexa.py

Now follows the main part. It’s an infinite while loop. We store the input the user types in the console in user_input, then we check if user_input equals ‘hi’; if not, Sirlexa randomly prints one of the sentences from our responses list. Instead of user_input.lower() we could also use .upper(), but then Sirlexa gets angry at you for yelling. A reconstruction of the whole file is sketched below.
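Since the screenshots were lost, here is a minimal sketch of what sirlexa.py plausibly looked like, based purely on the description above (the exact response sentences are my assumptions):

import random

# fallback sentences for when Sirlexa doesn't understand the input
responses = [
    "Sorry, I don't understand that yet.",
    "Could you say that differently?",
    "I'm still learning. Try again!",
]

while True:  # infinite loop: keep chatting until you close the program
    user_input = input("You: ")
    if user_input.lower() == "hi":
        print("Sirlexa: Hi there!")
    else:
        # print one of the fallback sentences at random
        print("Sirlexa: " + random.choice(responses))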

That’s it, you just created your own AI

Before leaving, read something about AI:

Artificial Intelligence is a very broad field and it covers many and very deep areas of computer science, mathematics, hardware design and even biology and psychology. As for the math: I think calculus, statistics and optimization are the most important topics, but learning as much math as you can won't hurt.

There are many good free introductory resources about AI for beginners. I highly recommend starting with this one: http://aiplaybook.a16z.com/ They also published two videos about the general concepts of AI, you can find them on Vimeo: "AI, Deep Learning, and Machine Learning: A Primer" and "The Promise of AI".

Once you have a clear understanding of the basic AI terms and approaches, you have to figure out what your goals are. What kind of AI software do you want to develop? What industries are you interested in? What are your chances to get involved in projects of big companies? It's easier to pick up the right tools when you know exactly what you want to achieve.

For most newcomers to AI the most interesting area is Deep Learning. Just to make it clear, there are many areas of AI outside of Machine Learning and there are many areas of Machine Learning outside of Deep Learning (Artificial Intelligence > Machine Learning > Deep Learning). Most recent developments and hyped news are about DL.

If you got interested in Deep Learning too, you have to start with learning about the concepts of artificial neural networks. Fortunately it's not too difficult to understand the basics and there are lots of tutorials, code examples and free learning resources on the web and there are many open-source frameworks to start experimenting with.

The most popular such Deep Learning framework is TensorFlow. It's backed by Google. Love it or hate it, it's a Python based framework. There are many other Python based frameworks, as well. Scikit-learn, Theano, Keras are frequently mentioned in tutorials too. (A tip: if you use Windows you can download WinPython that includes all of these frameworks.)

As for Java frameworks, unfortunately there are not so many options. The most prominent Java framework for DL is Deeplearning4j. It's developed by a small company and its user base is much smaller than the crowd around TensorFlow. There are fewer projects and tutorials for this framework. However, industry specialists say Java based frameworks eventually integrate better with Java based Big Data solutions and they may provide a higher level of portability and easier product deployment. Just a sidenote: NASA's Jet Propulsion Laboratory used Deeplearning4j for many projects.

If you decide to go with the flow and want to start learning more about TensorFlow, I recommend checking out the YouTube channels of "DeepLearning.TV", "sentdex" and "Siraj Raval". They have nice tutorials and some cool demos. And if you decide to take a deeper dive, you can sign up for an online course at Udacity or Coursera.

It may also interest you to know that there are other Deep Learning frameworks for the Java Virtual Machine with alternative languages, for example Clojure. Clojure is a dialect of LISP, the language invented by John McCarthy, the same computer scientist who coined the term "artificial intelligence". In other words, there are more modern and popular programming languages and tools, but it's still possible (and kinda cool) to use for AI a language that was originally designed for AI. ThinkTopic in Boulder and Freiheit in Hamburg are two companies that use Clojure for AI projects. And if you want to see something awesome to get inspiration to use Clojure in AI and robotics, I recommend the YouTube video "OSCON 2013: Carin Meier, The Joy of Flying Robots with Clojure". (Mentioning Clojure in this answer was just an example to show you there is life outside the bubble of Python-based AI frameworks.)

Leggi:  Quels sont les principaux plugins 5 WordPress pour le commerce électronique?

(+++ Anybody feel free to correct me if I said anything wrong. +++)

Nipun Ramakrishnan

Nipun Ramakrishnan, Undergraduate Research Assistant at Berkeley Artificial Intelligence Research (2017-present)

Answered 64w ago · Author has 280 answers and 2.6m answer views

Code for a majority of artificial intelligence applications is actually not anything particularly spectacular. It’s similar in many aspects to a lot of implementations of standard algorithms.

In the search and intelligent agent space, here is a general implementation of graph search algorithms (BFS, DFS, A*, etc.) in pseudocode.1

[Image: pseudocode for a generic graph search algorithm]
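The original pseudocode image did not survive, so here is a minimal Python sketch of the same idea (my own reconstruction): one generic search loop where the choice of frontier data structure decides the algorithm.

from collections import deque

def graph_search(start, is_goal, successors, pop):
    # pop decides the algorithm: pop from the left of the frontier for
    # BFS, from the right for DFS; a priority queue ordered by path
    # cost plus a heuristic would give A*.
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = pop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        if node in explored:
            continue
        explored.add(node)
        for nxt in successors(node):
            frontier.append(path + [nxt])
    return None

bfs = lambda s, g, succ: graph_search(s, g, succ, deque.popleft)  # FIFO frontier
dfs = lambda s, g, succ: graph_search(s, g, succ, deque.pop)      # LIFO frontier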

A good portion of machine learning code is a matter of handling datasets, cleaning data, and then using models defined in libraries such as scikit-learn.

Here’s an example of some basic machine learning model testing code in Python:2

[Image: Python code that loads a dataset and evaluates several scikit-learn models]
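The screenshot is gone, so here is a short sketch in the spirit of the linked tutorial (footnote 2): load a dataset, hold out a test split, and compare a few off-the-shelf scikit-learn models with cross-validation.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

models = {
    'logistic regression': LogisticRegression(max_iter=1000),
    'k-nearest neighbors': KNeighborsClassifier(),
    'decision tree': DecisionTreeClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=10)   # 10-fold CV
    print('%s: %.3f accuracy' % (name, scores.mean()))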

The interesting stuff begins when you start building your own models, or implementing known models, in either the machine learning or deep learning world. This is when you get into the mathematics of what’s actually going on in the model. These implementations generally involve linear algebra, probability, and optimization ideas.

Here’s an example of some of the code that goes into the implementation of Principal Component Analysis, an important dimensionality reduction algorithm:3

[Image: Python code implementing Principal Component Analysis]
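Again the image is lost; a minimal NumPy sketch of the core of PCA (my reconstruction, following the standard eigendecomposition recipe) looks like this:

import numpy as np

def pca(X, k):
    # center the data so the covariance matrix is meaningful
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    # eigh because the covariance matrix is symmetric
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]       # largest variance first
    components = eigvecs[:, order[:k]]
    return X_centered @ components          # project onto top-k components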

In the deep learning world, most models these days are built using TensorFlow. Here is an example of a TensorFlow implementation of the popular convolutional neural network, which is often used in the computer vision realm of artificial intelligence.4

[Image: TensorFlow code defining a convolutional neural network]
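The original screenshot used the older tf.layers API from the linked guide (footnote 4); here is an equivalent sketch of a small MNIST-style CNN using the Keras API bundled with TensorFlow (the exact layer sizes are my assumptions):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),            # downsample the feature maps
    tf.keras.layers.Conv2D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.4),               # regularization
    tf.keras.layers.Dense(10, activation='softmax'),  # one class per digit
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])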

So the story here is, most AI code is nothing special and looks like standard algorithm code that incorporates data. When it comes to developing something new and building models from scratch, however, it gets significantly more technical and mathematical than most other code you see in the software world.

Footnotes

1 https://s3-us-west-2.amazonaws.c...

2 Your First Machine Learning Project in Python Step-By-Step - Machine Learning Mastery

3 Implementing a Principal Component Analysis (PCA)

4 A Guide to TF Layers: Building a Convolutional Neural Network | TensorFlow

Raghavendra Devarasetty

Answered 9w ago · Author has 254 answers and 49.7k answer views

AI as it currently stands generally consists of a heap of complex linear algebra and statistics. However, thanks to DL libraries such as sklearn, keras, tensorflow, tflearn, theano and so on, most of the essential maths is abstracted away and models can be expressed in a very natural form. A complex neural network model can be represented in only a few lines, with all the dirty work taken care of. In my experience (I am just entering the field), more lines of code go into loading the data, preprocessing it, and feeding it into the model than into the actual model itself.

Of course, if you are on the cutting edge of the field and want to accomplish something that has never been done, it will be harder to implement your model. Then your code will look very complicated and obscure. But the community catches up really fast, so no worries.
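To illustrate the "few lines" point, here is a minimal Keras sketch (the architecture and hyperparameters are arbitrary choices of mine): a complete digit classifier where the library handles initialisation, backpropagation and optimisation.

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),  # hidden layer
    keras.layers.Dense(10, activation='softmax'),                    # one output per class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# training is a single call once x_train / y_train are loaded:
# model.fit(x_train, y_train, epochs=10, batch_size=32)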

For More Details Visit Here www.qualitythought.in
