by Goodfellow, Bengio and Courville
I finally finished reading Deep Learning, by Ian Goodfellow, Yoshua Bengio and Aaron Courville. And as wonderful a book as it is, it wasn’t an easy read, at all.
Secrecy halts humanity's progress
I’ve noticed a trend in the AI research field, one I feel pretty proud to talk about.
Given the latest advancements, whether or not research plateaus in the next few years, there is a definite technological advantage for the nations or corporations that take the first step into revolutionary new AI. I am not talking about a new Skynet (although some people suggest it), but rather about who gets the head start that creates the next vicious cycle: the kind where our regulations cannot keep up, and we need to rethink our laws regarding the economy, the military, or ethics.
We’ve all heard the concerns about military self-driving attack drones. We’ve all heard the concerns about self-driving cars making life-or-death decisions. We’ve all heard and watched the theories, movies and games that revamp the Turing Test and want us to reconsider how deep we are in the rabbit hole.
And Musk’s OpenAI platform was a first response to that, with a clear strategy: make advancements available to everyone. The real strategy behind OpenAI is that when that tipping point comes, the information will be available to all, enhancing competition and leveling the risk of any one company or nation taking advantage. Of course, you might agree that this is a very complex problem, and claiming that such a simple solution works would really oversimplify the situation.
But it is, without a doubt, a clever one.
Very recently, researchers raised their voices against opacity in research journals, which is another example of this tendency. I share the view that research and technological advances should benefit all of humanity: not only will that prevent unfairness, it will also make us all better off. Is there any good reason to willfully keep other organizations or nations behind in technological advancement? Ethically speaking, none.
I very much like this new tendency, and as non-profit software foundations have risen successfully over the past few decades, this might be the right time to make knowledge a concern for all of humanity.
Curso de Ernesto Mislej
(This post reviews Spanish-language content: a Udemy course on Data Science within an agile framework.)
A few days ago I finished the course Ciencia de datos ágil, by Ernesto Mislej, co-founder of 7puentes. The course had been recommended to me because it covers a gap that online sources don’t explain very well: how to carry out a Data Science project outside the typical waterfall process that is always described. Besides, Ernesto has a solid reputation, so the course caught my interest.
The course is relatively short, but fairly condensed. Its format is to first lay out some theory about the agile methodological framework and data science. With that foundation, it moves on to a practical example, as if we were taking part in a real project. Ernesto then walks us through what our deductions, back-and-forths and conclusions would be. He also explains which meetings were “held” with our clients and everything else that happens in an ordinary agile project.
These examples are truly illuminating: they let you see how such a project would be carried out. But as examples, some aspects are of course simplified. There is no mention of deliverables, testing processes, or poorly defined business cases, which are also the kinds of things that happen in real projects. I don’t blame the course for this: it is clearly an introduction to the topic, but for that reason some questions remain open for those who have already worked on agile projects.
All in all, it’s a good investment of 2 hours and 15 dollars. (Very affordable!) I would definitely recommend it.
If you’re interested, you can find it on Udemy: Ciencia de datos ágil.
Guest post on Making Sense's blog
I just wrote a guest post on Making Sense’s blog: A Novice’s Introduction to Data Science. Hopefully the first in a series, but for the moment, feel free to check that one out to find out what Data Science is and why there is all the hype about it lately.
by Rae Steinbach, in cooperation with Y Media Labs
This is another great guest post, this time from Rae Steinbach. She is a graduate of Tufts University with a combined International Relations and Chinese degree. After spending time living and working abroad in China, she returned to NYC to pursue her career and continue curating quality content. Rae is passionate about travel, food, and writing, of course.
Her post talks about the impact that Google’s AI vision will have on retail businesses. Thank you very much, Rae!
During 2017’s Google I/O, where developers from all around the world explore emerging technologies together, the company announced several new features for both Google Home and Google Assistant. For instance, Google Assistant, now equipped with AI, will be able to provide relevant information about your environment by “seeing” it through the phone’s camera. You could just point the lens at a business you pass on the street and suddenly receive information about its services, customer reviews, and more.
How machine learning algorithms may take on our work
I recently came across the article Using Artificial Intelligence to Augment Human Intelligence, by Shan Carter and Michael Nielsen. I’d like to tell you a bit about the ideas that this essay mentions, and a few interpretations of my own about them.
Automatic timesheet entry
As you may know, part of the daily responsibilities of software workers is to log their time. (And of many other professions too, I’m sure.) This means reporting time in enough detail that our managers (or sometimes we ourselves) can properly bill each customer for the work done.
The problem is that this is a pretty repetitive task. Not only that: each customer has requirements of their own, like using their own time-tracking system, separating the work into tickets, or receiving a summary by email.
I created Worklogger to be a Swiss-army-knife solution to all of these variables.
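The core idea can be sketched in a few lines: one stream of time entries, with a pluggable handler per customer. (This is a hypothetical sketch of the concept, not Worklogger’s actual API; the customer names and handler functions are invented for illustration.)

```python
from dataclasses import dataclass


@dataclass
class Entry:
    """One logged unit of work (hypothetical shape)."""
    customer: str
    hours: float
    note: str


def ticket_summary(entries):
    # A customer that wants hours grouped per ticket/note.
    return {e.note: e.hours for e in entries}


def total_hours(entries):
    # A customer that only wants a single billable total.
    return sum(e.hours for e in entries)


# Each customer's reporting requirement is just a plugged-in function.
HANDLERS = {"acme": ticket_summary, "globex": total_hours}


def report(entries):
    """Group entries by customer and apply that customer's handler."""
    by_customer = {}
    for e in entries:
        by_customer.setdefault(e.customer, []).append(e)
    return {c: HANDLERS[c](es) for c, es in by_customer.items()}


entries = [
    Entry("acme", 2.0, "TICKET-1"),
    Entry("acme", 1.5, "TICKET-2"),
    Entry("globex", 3.0, "maintenance"),
]
result = report(entries)
```

Adding a new customer requirement (say, an email summary) then only means writing one more handler function, which is the Swiss-army-knife property described above.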
But causation ⇒ correlation
In my earlier post I explained how a certain type of machine learning model, specifically the neural network, finds the correlations between two sets of values. For predictive models, we feed correlated variables to train our models. Sometimes, however, we don’t know whether or how the variables correlate, and part of the machine learning “intelligence” is actually finding that out.
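To make “variables that correlate” concrete, here is a minimal pure-Python computation of the Pearson correlation coefficient on made-up data (the hours/score numbers are invented for illustration):

```python
import math


def pearson(xs, ys):
    # Pearson correlation: covariance normalized by both standard deviations.
    # Ranges from -1 (perfect inverse) through 0 (none) to +1 (perfect direct).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical data: hours studied vs. exam score, roughly linear.
hours = [1, 2, 3, 4, 5, 6]
score = [52, 55, 61, 64, 70, 73]
r = pearson(hours, score)
```

A value of `r` near 1 is exactly the kind of relationship a predictive model can exploit; a neural network goes further and can pick up relationships that a single linear coefficient like this one would miss.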
A simple explanation on the basis of neural network learning
This is a question I’ve been asked recently, and I think it’s interesting enough to share. A few people asked me how it is that machines can learn, and specifically, how neural networks can learn to understand data that may be really complex. The goal of this article is not to give an in-depth explanation, but rather one that can be easily understood.
Course contents and structure
A few weeks ago I graduated from Udacity’s Deep Learning Foundations Nanodegree program. It was an introduction to all things deep learning, from the very basics to state-of-the-art techniques. Let me tell you a little bit about it.