Showing posts with label ai. Show all posts

Sunday, 8 June 2025

Peeking inside the mind of a norn: what happens when we take parts away?

Norns are so complex, and they work (on the whole) so well, that it is hard to see what any particular part does without removing it.

Socrates: who could not learn

An early success in removing norn functionality was Socrates, a norn who could not learn, and who had to rely on her instincts and the player's guidance to function in the Albia of Creatures 2. Socrates led to the breakthrough, around the time of OHSS, that the learning feature was broken, which in turn led to the development of the Canny Norns and other genetic breeds for C2.

Instinctless: When smart norns get too smart

Conversely, the good behaviour of C3 norns has led to some players yearning for the numbskulls of yore - leading to various experiments into instinctless creatures, such as the No-instinct Norn by Slaterbait and the instinctless norns by Amaikokonut. What these experiments revealed is the symbiotic balance between instinct and learning. Remove instincts, and the Norn becomes a blank slate: fascinating, maybe even rewarding to raise, but also desperately inefficient without human help, particularly when lifts are involved.

Dreams: the bridge between instinct and learning

Sleep isn't just downtime for Norns; it's an essential part of how they process and apply their instincts. According to the documentation for the CAOS command DREA, a dreaming creature experiences an instinct's situation and its consequence, then another situation and consequence every five seconds, strengthening the neural connections between the dreamed situation and the actions held in its instincts. Norns also dream while they are waiting to hatch, so a norn who feels no need to sleep is not quite the same as a norn who has no instincts, but they should behave similarly, particularly for instincts which only switch on at adulthood.
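
Roughly speaking, the effect is a bit like the toy Python sketch below. The situations, rewards and learning rate here are made up purely for illustration; the real mechanism lives in the genome and the DREA command, not in anything like this.

import random

# Toy model: an instinct links a situation to an action and a drive reward.
instincts = [
    ("hungry and near food", "eat food", 0.5),
    ("in pain and near a grendel", "retreat", 0.4),
]

weights = {}  # (situation, action) -> connection strength

def dream_once(weights, instincts, learning_rate=0.1):
    # Replay one instinct and nudge up the link between its situation and action.
    situation, action, reward = random.choice(instincts)
    key = (situation, action)
    weights[key] = weights.get(key, 0.0) + learning_rate * reward
    return key, weights[key]

# In the game, a sleeping (or unhatched) creature processes roughly one
# instinct every five seconds; here we just replay a few in a row.
for _ in range(3):
    key, strength = dream_once(weights, instincts)
    print("dreamed", key, "strength now", round(strength, 2))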

The Combination Lobe: Where norns make up their minds

In the brain of a C3 Norn, real decision-making happens in the combination lobe. This part of the brain decides what action to take on which object—like whether to eat food or hit it. Each neuron in this lobe represents one specific action-object combination, and it fires based on a mix of inputs: the Norn’s current drives (like hunger or boredom), how close the object is, whether the object or action was recently mentioned, and even how it smells.
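
As a very rough illustration, the decision could be imagined like the Python below. Nothing here is the real C3 neuron wiring; the candidate objects, inputs and weightings are invented for the example.

# Each candidate is an action/object pair, scored from the norn's drives
# and what it senses about the object (illustrative numbers only).
drives = {"hunger": 0.8, "boredom": 0.3, "pain": 0.1}

candidates = [
    # (action, object, how well it reduces the dominant drive,
    #  closeness 0..1, recently mentioned?, smell strength 0..1)
    ("eat",  "carrot",   0.9, 0.7, False, 0.6),
    ("hit",  "carrot",   0.0, 0.7, False, 0.6),
    ("play", "toy ball", 0.4, 0.2, True,  0.1),
]

def score(action, obj, drive_relief, closeness, mentioned, smell):
    dominant = max(drives.values())
    return (2.0 * dominant * drive_relief        # will it help the strongest drive?
            + 1.0 * closeness                    # nearby objects win attention
            + 0.5 * (1.0 if mentioned else 0.0)  # the player just said it
            + 0.5 * smell)                       # smells drift in from off-screen

best = max(candidates, key=lambda c: score(*c))
print("decision:", best[0], best[1])   # -> decision: eat carrot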

All these factors come together to help the creature choose the most relevant, goal-driven behaviour. It's not random; it's a carefully weighted decision. The combination lobe is what makes a Norn's actions feel intentional, responsive, and lifelike. The genome that came with the game constantly overwrote the combination lobe's information with new learning, which sometimes caused confusion. This was tweaked in the Creatures Full of Edits (CFE) genome and the genomes based on it: new information only adds to what was previously learned, so the norn's average experience of, say, food comes to dominate, and a weird invention that dispenses both fatty goodies and pain should cause less confusion.
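
A hedged sketch of the difference between the two update styles, again in toy Python: the actual genomes adjust dendrite strengths rather than a single number, so treat this as an analogy only.

def overwrite_update(old_value, new_experience):
    # Original-genome style: the latest experience replaces what was known.
    return new_experience

def additive_update(old_value, new_experience, rate=0.25):
    # CFE-style: new experience only nudges the stored value, so the
    # long-run average dominates over any single odd encounter.
    return old_value + rate * (new_experience - old_value)

experiences = [0.8, 0.8, 0.8, -0.9]  # mostly good food, then one painful vendor

old_style, cfe_style = 0.0, 0.0
for e in experiences:
    old_style = overwrite_update(old_style, e)
    cfe_style = additive_update(cfe_style, e)

print("overwrite:", round(old_style, 2), "additive:", round(cfe_style, 2))
# The overwrite norn now 'believes' food hurts; the additive one still
# remembers that food is usually good.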

What We’ve Learned from What We’ve Lost

Removing parts of the Norn system—learning, instincts, sleep—is more than a curiosity. It’s been a way to understand the elegant, interconnected systems that make Norns feel alive.

  • Instinct without learning gives you rigid, predictable creatures.

  • Learning without instinct gives you naive, chaotic adventurers.

  • No sleep? You get Norns who could know better—but don’t.

And without a properly functioning combination lobe, even a well-fed, well-informed Norn struggles to act. It’s here that all other systems—drive, perception, memory—are brought together and turned into meaningful decisions. Break that link, and even perfect instincts and learning can't express themselves.

Each system plays a role, and each missing part makes us appreciate the whole. Like any real organism, Norns aren’t just the sum of their parts—they're the interaction of those parts.

So whether you're raising a genius or a lovable numbskull, remember: every Norn has its place in the digital Darwinian dance. And sometimes, breaking things is the best way to understand how and why they work.

Sunday, 31 July 2022

Researchers trained an AI model to ‘think’ like a baby, and it suddenly excelled


Susan Hespos, Western Sydney University

In a world rife with opposing views, let’s draw attention to something we can all agree on: if I show you my pen, and then hide it behind my back, my pen still exists – even though you can’t see it anymore. We can all agree it still exists, and probably has the same shape and colour it did before it went behind my back. This is just common sense.

These common-sense laws of the physical world are universally understood by humans. Even two-month-old infants share this understanding. But scientists are still puzzled by some aspects of how we achieve this fundamental understanding. And we’ve yet to build a computer that can rival the common-sense abilities of a typically developing infant.

New research by Luis Piloto and colleagues at Princeton University – which I’m reviewing for an article in Nature Human Behaviour – takes a step towards filling this gap. The researchers created a deep-learning artificial intelligence (AI) system that acquired an understanding of some common-sense laws of the physical world.

The findings will help build better computer models that simulate the human mind, by approaching a task with the same assumptions as an infant.

Childish behaviour

Typically, AI models start with a blank slate and are trained on data with many different examples, from which the model constructs knowledge. But research on infants suggests this is not what babies do. Instead of building knowledge from scratch, infants start with some principled expectations about objects.

For instance, they expect that if they attend to an object that is then hidden behind another object, the first object will continue to exist. This is a core assumption that starts them off in the right direction. Their knowledge then becomes more refined with time and experience.

The exciting finding by Piloto and colleagues is that a deep-learning AI system modelled on what babies do outperforms a system that begins with a blank slate and tries to learn based on experience alone.

Cube slides and balls into walls

The researchers compared both approaches. In the blank-slate version, the AI model was given several visual animations of objects. In some examples, a cube would slide down a ramp. In others, a ball bounced into a wall.

The model detected patterns from the various animations, and was then tested on its ability to predict outcomes with new visual animations of objects. This performance was compared to a model that had “principled expectations” built in before it experienced any visual animations.

These principles were based on the expectations infants have about how objects behave and interact. For example, infants expect two objects should not pass through one another.

If you show an infant a magic trick where you violate this expectation, they can detect the magic. They reveal this knowledge by looking significantly longer at events with unexpected, or “magic” outcomes, compared to events where the outcomes are expected.

Infants also expect an object should not be able to just blink in and out of existence. They can detect when this expectation is violated as well.
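
That "violation of expectation" idea can be sketched in a few lines of toy Python. This is not the model from the paper, just an illustration of the kind of principle the researchers built in: an object that vanishes while nothing is covering it should register as surprising.

frames = [
    {"ball": {"visible": True,  "occluded": False}},
    {"ball": {"visible": False, "occluded": True}},   # passes behind a screen: expected
    {"ball": {"visible": True,  "occluded": False}},  # screen moves on, ball is back: expected
    {"ball": {"visible": False, "occluded": False}},  # vanishes in plain sight: "magic"
]

def surprise(previous, current):
    # Flag any object that was visible but has now disappeared without an occluder.
    alerts = []
    for name, was in previous.items():
        now = current.get(name, {"visible": False, "occluded": False})
        vanished_in_plain_sight = was["visible"] and not now["visible"] and not now["occluded"]
        if vanished_in_plain_sight:
            alerts.append(name)
    return alerts

for before, after in zip(frames, frames[1:]):
    unexpected = surprise(before, after)
    print("unexpected:", unexpected or "nothing")
# -> nothing, nothing, then ['ball'] when object permanence is violated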

Infants can detect when objects seem to defy the basic laws governing the physical world. Shutterstock

Piloto and colleagues found the deep-learning model that started with a blank slate did a good job, but the model based on object-centred coding inspired by infant cognition did significantly better.

The latter model could more accurately predict how an object would move, was more successful at applying the expectations to new animations, and learned from a smaller set of examples (for example, it managed this after the equivalent of 28 hours of video).

An innate understanding?

It’s clear learning through time and experience is important, but it isn’t the whole story. This research by Piloto and colleagues is contributing insight to the age-old question of what may be innate in humans, and what may be learned.

Beyond that, it’s defining new boundaries for what role perceptual data can play when it comes to artificial systems acquiring knowledge. And it also shows how studies on babies can contribute to building better AI systems that simulate the human mind. The Conversation

Susan Hespos, Psychology Department at Northwestern University, Evanston, Illinois, USA, and Professor of Infant Studies at MARCS Institute, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, 11 August 2015

Artificial Intelligence should benefit society, not create threats


Toby Walsh, NICTA
Some of the biggest players in Artificial Intelligence (AI) have joined together calling for any research to focus on the benefits we can reap from AI “while avoiding potential pitfalls”. Research into AI continues to seek out new ways to develop technologies that can take on tasks currently performed by humans, but it’s not without criticisms and concerns.
I am not sure the famous British theoretical physicist Stephen Hawking does irony, but it was somewhat ironic that he recently welcomed the arrival of the smarter predictive computer software that controls his speech by warning us that:
The development of full artificial intelligence could spell the end of the human race.
Of course, Hawking is not alone in this view. The serial entrepreneur and technologist Elon Musk also warned last year that:
[…] we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.
Both address an issue that taps into deep, psychological fears that have haunted mankind for centuries. What happens if our creations eventually cause our own downfall? This fear is expressed in stories like Mary Shelley’s Frankenstein.

An open letter for AI

In response to such concerns, an open letter has just been signed by top AI researchers in industry and academia (as well as by Hawking and Musk).
Signatures include those of the president of the Association for the Advancement of Artificial Intelligence, the founders of AI startups DeepMind and Vicarious, and well-known researchers at Google, Microsoft, Stanford and elsewhere.
In the interests of full disclosure, mine is also one of the early signatures on the list, which continues to attract more support by the day.
The open letter argues that there is now a broad consensus that AI research is progressing steadily and its impact on society is likely to increase.
For this reason, the letter concludes we need to start to research how to ensure that increasingly capable AI systems are robust (in their behaviours) and beneficial (to humans). For example, we need to work out how to build AI systems that result in greater prosperity within society, even for those put out of work.
The letter includes a link to a document outlining some interdisciplinary research priorities that should be tackled in advance of developing artificial intelligence. These include short-term priorities such as optimising the economic benefits and long-term priorities such as being able to verify the formal properties of AI systems.

The AI threat to society

Hollywood has provided many memorable visions of the threat AI might pose to society, from Arthur C. Clarke’s 2001: A Space Odyssey through Robocop and Terminator to recent movies such as Her and Transcendence, all of which paint a dystopian view of a future transformed by AI.
My opinion (and one many of my colleagues share) is that AI that might threaten our society’s future is likely still some way off.
AI researchers have been predicting it will take another 30 or 40 years now for the last 30 or 40 years. And if you ask most of them today, they (as I) will still say it is likely to take another 30 or 40 years.
Making computers behave intelligently is a tough scientific nut to crack. The human brain is the most complex system we know of by orders of magnitude. Replicating the sort of intelligence that humans display will likely require significant advances in AI.
The human brain does all its magic with just 20 watts of power. This is a remarkable piece of engineering.

Other risks to society

There are also more imminent dangers facing mankind such as climate change or the ongoing global financial crisis. These need immediate attention.
The Future of Humanity Institute at the University of Oxford has a long list of risks besides AI that threaten our society, including:
  • nanotechnology
  • biotechnology
  • resource depletion
  • overpopulation.
This doesn’t mean that there are not aspects of AI that need attention in the near future.

The AI debate for the future

The Campaign to Stop Killer Robots is advancing the debate on whether we need to ban fully autonomous weapons.
I am organising a debate on this topic at the next annual conference of the Association for the Advancement of Artificial Intelligence later this month in Austin, Texas, in the US.
Steve Goose, director of Human Rights Watch’s Arms Division, will speak for a ban, while Ron Arkin, an American roboticist and robo-ethicist, will argue against it.
Another issue that requires more immediate attention is the impact that AI will have on the nature of work. How does society adapt to more automation and fewer people needed to work?
If we can get this right, we could remove much of the drudgery from our lives. If we get it wrong, the increasing inequalities documented by the French economist Thomas Piketty will only get worse.
We will discuss all these issues and more at the first International Workshop on AI and Ethics, also being held in the US within the AAAI Conference on Artificial Intelligence.
It’s important we start to have these debates now, not just to avoid the potential pitfalls, but to construct a future where AI improves the world for all of us.
The Conversation
Toby Walsh is Professor, Research Group Leader, Optimisation Research Group at NICTA.
This article was originally published on The Conversation. Read the original article.

Wednesday, 29 April 2015

On Strong AI and the Consciousness of Creatures

I think, therefore I flib.


Over at Creatures Caves, Sparrow314 is gearing up to write an article for university, looking at whether strong AI is valid, and whether the DS Chichis could be said to be conscious. It builds on the Integrated Information Theory 3.0 developed by Giulio Tononi at the Center for Sleep and Consciousness at the University of Wisconsin-Madison, as well as earlier work by pioneers of AI.

Even though the project is still at an early stage, there's already a lot of discussion going on about the nuts and bolts of how the Creatures brain works, and the implications of consciousness in norns... as well as discussion of several historical Creatures documents! Don't forget to check it out!