World After Capital


James Marcroft (@jamesmarcroft) started discussion #533

2 years ago · 8 comments


Finally, what about the arrival of the new humans? How will we treat them? The video of a robot being mistreated by Boston Dynamics is not a good start here. This is a difficult topic because it sounds so preposterous. Should machines have human rights? Well if the machines are humans then clearly yes. And my approach to what makes humans distinctly human would apply to artificial general intelligence. Does an artificial general intelligence have to be human in other ways as well in order to qualify? For instance, does it need to have emotions? I would argue no, because we vary widely in how we handle emotions, including conditions such as psychopathy. Since these new humans will likely share very little, if any, of our biological hardware, there is no reason to expect that their emotions should be similar to ours (or that they should have a need for emotions altogether).

Urgency

Are you talking about this video? If so, that's to test the robot's ability to maintain and regain balance in difficult environments (a hard RL and robotics problem). Testing and training for adversarial and unexpected environments is paramount to creating safe and reliable robots. That's not mistreatment; it's simply testing the mechanics of the algorithm to safely complete a task under interference. Please don't anthropomorphize robotics and machine learning more than necessary.

James Marcroft @jamesmarcroft commented 2 years ago

Also... while it's good to be thinking ahead, let's look at what machine learning, and even RL like AlphaGo, really is. It's an algorithm run on a GPU/TPU with an objective function hard-coded by humans, such as "minimize the loss between the label and the predicted output" or "get the highest score possible and win the game." Humans make the decisions about what the algorithm optimizes for. It does not have consciousness (consciousness defined as the ability to choose one's own goals).
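A minimal sketch of what a hard-coded objective looks like (the data and numbers here are invented for illustration, not from any system mentioned above): a human writes down the loss, and gradient descent only ever minimizes that loss.

```python
# Illustrative sketch: the "objective" in supervised learning is a loss
# function chosen by a human -- here, mean squared error between labels
# and predictions. The algorithm only minimizes it; it never picks a goal.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))      # inputs
y = 3.0 * X[:, 0] + 1.0            # labels generated by y = 3x + 1

w, b = 0.0, 0.0                    # model parameters
lr = 0.1                           # learning rate, chosen by a human

for _ in range(500):
    pred = w * X[:, 0] + b
    err = pred - y
    # gradients of the human-specified MSE loss: mean((pred - y)^2)
    grad_w = 2 * np.mean(err * X[:, 0])
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # recovers roughly w = 3, b = 1
```

Nothing in the loop can change the objective itself; changing what is optimized means a human editing the code.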

James Marcroft @jamesmarcroft commented 2 years ago

Neural networks, convolutional neural networks, capsule networks, recurrent neural networks, LSTMs, and all the other architectures we're exploring are tools that fit a function to an input/output mapping via an optimization algorithm. We use a lot of tools that accomplish various tasks, from our eyes to our cars. Regardless, they are tools and do not have consciousness. They perform a function when activated; that's it.
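To make the "function fit to an input/output mapping" point tangible, here is a toy sketch with hand-picked, untrained weights (purely illustrative): the whole network is a fixed composition of affine maps and nonlinearities, and calling it does nothing but map an input to an output.

```python
# A neural network is just a nested function: matrix multiplies plus
# nonlinearities. "Activating" it maps an input to an output; nothing else.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hand-picked weights that make this 2-layer net compute XOR.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

def net(x):
    h = relu(x @ W1 + b1)   # hidden layer: affine map + nonlinearity
    return h @ W2 + b2      # output layer: another affine map

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, net(np.array(x, dtype=float)))   # reproduces XOR
```

Training just searches for weights like these automatically; the resulting object is still a fixed function.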

James Marcroft @jamesmarcroft commented 2 years ago

To say that we are anywhere near creating algorithms with consciousness is simply false. Even AutoML is just a tool to create and optimize more tools; in this case the tools just have fancy names such as "neural networks," which actually don't look or behave anything like our brains do. Speaking of our brains, we don't even know how our own brains and consciousness work, so how could we create machines that are "conscious" and therefore "human"? It simply isn't a realistic line of thinking.

James Marcroft @jamesmarcroft commented 2 years ago

For more concrete resources to better understand the nature of neural networks, I recommend Michael Nielsen's book, which shows how they are simply function approximators. His "magic paper" idea is also an easy one-stop shop to see tangibly how that works.

James Marcroft @jamesmarcroft commented 2 years ago

For reinforcement learning I recommend Richard Sutton's book (he is one of the main pioneers of RL, having worked on it for 30-40 years), as well as David Silver's YouTube series explaining the main algorithms (David Silver is one of the main architects of AlphaGo and AlphaZero at DeepMind).
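As a rough sketch of the kind of algorithm that book covers, here is tabular Q-learning on a made-up five-state chain (the environment and hyperparameters are invented for illustration): the goal, reach the right end for a reward of 1, is fixed by the programmer, and the agent only estimates values for that fixed goal.

```python
# Illustrative tabular Q-learning on a toy chain. The reward function --
# i.e. "what counts as good" -- is hard-coded by a human below.
import numpy as np

N = 5                       # states 0..4 in a line; state 4 is terminal
Q = np.zeros((N, 2))        # Q[s, a]: a=0 moves left, a=1 moves right
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(500):                      # episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0   # the human-specified reward
        # the Q-learning update: move Q[s, a] toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1)[:-1])          # learned policy: always move right
```

The agent "wants" to reach the reward only in the sense that a thermostat "wants" a temperature: both optimize a target someone else wrote down.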

James Marcroft @jamesmarcroft commented 2 years ago

To really drive this home, here's an example of a convolutional neural network for image recognition implemented in Google Sheets: just multiplication, addition, and a little calculus.
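The same "multiplication and addition" point can be sketched in plain Python (the tiny image and kernel below are made up for illustration; the calculus only enters when you train the kernel): a convolution layer's forward pass is nothing but multiply-and-add over image patches.

```python
# A valid 2-D cross-correlation (what deep-learning "convolution" usually
# means in practice), using nothing but loops, multiplication, and addition.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # multiply-and-add the kernel against one image patch
            for di in range(kh):
                for dj in range(kw):
                    out[i][j] += image[i + di][j + dj] * kernel[di][dj]
    return out

# A vertical-edge detector on a tiny image: bright left half, dark right half.
img = [[1, 1, 0, 0]] * 4
edge = [[1, -1]] * 2          # 2x2 kernel responding to horizontal change
print(conv2d(img, edge))      # peaks where brightness changes
```

This is exactly what a spreadsheet version does cell by cell, which is why it fits in Google Sheets at all.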

James Marcroft @jamesmarcroft commented 2 years ago

To REALLY drive this point home, Boston Dynamics has actually named their latest video "Testing Robustness". They are testing its ability to safely complete a task in a complex, unpredictable, and difficult environment. Imagine, instead of a person with a stick, a burning building where things collapse and push doors closed, or the robot getting caught on something that pulls it backwards.


