Why the walls at Fennies are white

Child development expert Professor Sam Wass explains why the walls at Fennies are white, and how this has a positive impact on children's learning.

Visitors are often struck, when they come to our nurseries, by how different the environment is from many traditional settings.

They’re expecting a setting in which every wall is covered with bright, colourful children's art, where you bump into things everywhere you move, and where every surface is covered with toys. What they find instead are open, airy rooms with big, white walls, without too much crammed in, and with a few carefully chosen toys made from natural materials.

Why is this? In this article, I lay out some of the neuroscience research into how young brains process the sights and sounds around them – research that has motivated our decision to design our nurseries in a more pared-down, minimalist way than most others.

Young children’s brains process the world around them very differently to adult brains. It’s taken a while for us to work this out, because we’ve got no way of jumping into a child’s brain to see what the world feels like to them. But gradually, by measuring how children’s brains respond to different elements of their environments, we are starting to get a picture of just how different a child’s experience of the world is from a grown-up’s.

To understand why, we need to look at how our brains sense the sights and sounds around us. As adults, our brains are so practised at this that it doesn’t feel like hard work at all – we forget we’re doing it completely. But children’s brains haven’t had so much practice, so they find it much harder work.

We can see things around us because billions of photons – particles of light – are bouncing off the surfaces and people in the room, and a small fraction of these happen to pass into our eyes and land on a small, light-sensitive surface at the back of each eyeball called the retina. The retina is just a few millimetres across, and the image that forms on it is flat. Somehow, our brains manage to convert this tiny, flat, squashed-up image into a 3-D, life-sized model of our environment that allows us to, for example, move around without bumping into things.

Our ability to make sense of sound is equally, if not more, amazing. In a busy nursery there are countless sources emitting sound at any moment – someone at the next table talking, a chair scraping on the floor. Each sound source creates a wave of air pressure that bounces around, reflects off surfaces, and intermingles with the pressure waves coming from other sources. Eventually these waves reach our ears, where they make a thin membrane called the eardrum vibrate. To make sense of what is happening – for example, to follow the voice of someone who is talking to us – our brains have to separate out these intermingled pressure waves, which are all arriving at the same time and often at the same pitch.

So how do our brains figure out how to do such amazingly complicated tasks?

Initially, neuroscientists tended to assume that processing sights and sounds is a fairly passive task: information reaches our sense organs and is processed, step by step, until we can make sense of it. ‘Low-level’ visual processes take place first, close to where visual information first reaches our brain. These do basic things, like identifying bright and dark patches, and where the edges of objects are. Then, as we go ‘upwards’ in the brain, we get progressively higher-order, more abstract representations. For example, you start to see brain regions that respond to an image of a particular object whichever angle you’re viewing it from, and so on.

This is definitely still true – but, over the past ten to twenty years, we’ve also started to realise that it’s a bit more complicated than that. In fact, when we’re processing information, there is a constant conversation happening within our brains. As well as visual information getting processed upwards, step by step, there is also back-chat, with our ‘higher-order’ brain areas sending predictions back down to the ‘low-level’ areas about what they expect to see next. Our brains generate predictions moment by moment for what we’re expecting to see, and then check to see whether what we’re actually seeing matches those predictions.

It's because of this that, as adults, we’re much better at noticing things that we’re expecting to see than things that we’re not expecting. There are some famous examples of this, such as this video (I won’t tell you what happens – just watch it and you’ll see!). Something can be right in front of our eyes, but if we’re trying to pay attention to something else, we won’t be aware of it at all.

So what about children?

Well, in a child’s brain, this process I’ve described – making predictions and then checking whether our observations match them – doesn’t work as well. The reason is quite simple: predictions require knowledge. When I generate predictions, I do so based on the knowledge and experience of things I’ve seen and heard before. When I’m listening to language, for example, my brain is drawing on my previous knowledge of language to predict what I will hear next. Because children have less prior knowledge of language, they find it harder to use these predictions to ‘tease apart’ one speech stream when multiple streams of speech are reaching them at once.

So how does all this mean that children experience the world differently?

In short it means that processing complex, ‘noisy’ environments (whether it’s visual or auditory noise that we’re thinking of) is much, much harder for children compared with adults. For example, when they’re in an environment where there are multiple people talking at once, children find it much harder to ‘tune in’ to the voice of one person and to ‘shut out’ distractions.

We’ve studied this now in a few different ways. For example, we can measure how well a brain is perceiving speech, because when we listen to speech the brain’s activity patterns actually track the volume fluctuations in that speech. When there are multiple streams of speech at once, the brain’s activity patterns match up with the volume fluctuations of the stream we’re trying to pay attention to more than with those of the streams we’re trying to ignore. When we measure this ability in children, we can see that their brains can’t do it as well – their activity patterns are still affected by volume fluctuations in the speech streams they’re trying to ignore. We also know this from behavioural experiments, such as studies where you ask children to pay attention to one thing and ignore distractions, and then measure afterwards how aware they were of the things they were trying to ignore.

In short, children’s brains find it harder to make sense of complex, noisy environments. In particular, they find it harder to ‘tune in’ to the one thing they are trying to pay attention to, and to tune out the things they’re trying to ignore. This is as true for vision – for example, trying to see the teacher’s face against a background that is complex and multi-coloured, with lots of edges and bright and dark patches behind it – as it is for hearing, where they are trying to pay attention to one voice in a noisy room.

It's because of this that the more we can simplify the sights and sounds reaching a young child’s eyes and ears, the better. Children learn language better when they hear just one person speaking against a background of silence. And for a baby, it can be a struggle for their young brain to ‘find’ where the faces are within their field of view. This is so easy for our adult brains that we don’t even realise it’s a challenge – but it is for babies. If they see faces against a plain white background, it’s easier for their brains to work out where the faces are than if they see them against a complex, bright, colourful background.

So this is why the walls at Fennies nurseries are white! In designing our nurseries, as with all aspects of the service that we provide, we try hard to stay up to date with the latest discoveries and insights from psychology and neuroscience.
