From other sources
We are gathering up articles from here and there because they seem to illuminate some aspect of this work.
Cognition - from Lumen
What Is Cognition?
- Distinguish between concepts and prototypes
- Explain the difference between natural and artificial concepts
Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced. Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.
Categories and Concepts
A category is a set of objects that can be treated as equivalent in some way. For example, consider the following categories: trucks, wireless devices, weddings, psychopaths, and trout. Although the objects in a given category are different from one another, they have many commonalities. When you know something is a truck, you know quite a bit about it. Remember, the psychology of categories concerns how people learn and use informative categories such as trucks or psychopaths. The mental representations we form of categories are called concepts. There is a category of trucks in the world, and you also have a concept of trucks in your head. We assume that people’s concepts correspond more or less closely to the actual category, but it can be useful to distinguish the two, as when someone’s concept is not really correct.
Concepts and Prototypes
The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nerve impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the mind also pulls information from emotions and memories (Figure 1). Emotion and memory are powerful influences on both our thoughts and behaviors.
[Figure: a flow diagram in the outline of a human head. “Information, sensations” feeds into “Emotions, memories,” which feeds into “Thoughts”; “Thoughts” feeds back into “Emotions, memories” and forward into “Behavior.”] Figure 1. Sensations and information are received by our brains, filtered through emotions and memories, and processed to become thoughts.
In order to organize this staggering amount of information, the mind has developed a “file cabinet” of sorts. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.
Concepts are informed by our semantic memory (you will learn more about this concept when you study memory) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.
Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.
Concepts are at the core of intelligent behavior. We expect people to be able to know what to do in new situations and when confronting new objects. If you go into a new classroom and see chairs, a blackboard, a projector, and a screen, you know what these things are and how they will be used. You’ll sit on one of the chairs and expect the instructor to write on the blackboard or project something onto the screen. You do this even if you have never seen any of these particular objects before, because you have concepts of classrooms, chairs, projectors, and so forth, that tell you what they are and what you’re supposed to do with them. Furthermore, if someone tells you a new fact about the projector—for example, that it has a halogen bulb—you are likely to extend this fact to other projectors you encounter. In short, concepts allow you to extend what you have learned about a limited number of objects to a potentially infinite set of entities.
Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, what comes to your mind when you think of a dog? Most likely your early experiences with dogs will shape what you imagine. If your first pet was a Golden Retriever, there is a good chance that this would be your prototype for the category of dogs.
LINK TO LEARNING
Test how well you can match the computer’s prototype for certain objects by playing this interactive game, Quick Draw!
Natural and Artificial Concepts
In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations, experiences with snow, or indirect knowledge (such as from films or books) (Figure 3).
[Figure: (a) a sunlit, snow-covered landscape; (b) basic geometric solids, including a sphere perched on the corner of a cube and a triangular shape.] Figure 3. (a) Our concept of snow is an example of a natural concept—one that we understand through direct observation and experience. (b) In contrast, artificial concepts are ones that we know by a specific set of characteristics that they always exhibit, such as what defines different basic shapes. (credit a: modification of work by Maarten Takens; credit b: modification of work by “Shayan (USA)”/Flickr)
An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width), are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
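The layering of artificial concepts described above can be written out as a worked example: the general rectangle formula is the base concept, and the square’s area is the special case where the two sides are equal. (The symbol names here are just illustrative labels, not taken from the text.)

```latex
A_{\text{rectangle}} = \ell \times w
\qquad\Longrightarrow\qquad
A_{\text{square}} = s \times s = s^{2}
\quad (\text{taking } \ell = w = s)
```

Each later concept (area of a triangle, of a circle, and so on) reuses the same underlying notion of area, which is the “building block” behavior Goldstone and Kersten describe.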
A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.
There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already, unknowingly, made judgments about him. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata regardless of whether they are accurate: perhaps this particular firefighter is not brave; he just works as a firefighter to pay the bills while studying to become a children’s librarian.
An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator (Figure 4). First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.).
[Figure: a crowded elevator.] Figure 4. What event schema do you perform when riding in an elevator? (credit: “Gideon”/Flickr)
Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013) (Figure 5).
[Figure: a driver holding a cellular phone at the wheel.] Figure 5. Texting while driving is dangerous, but it is a difficult event schema for some people to resist.
Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.
Watch this CrashCourse video to see more examples of concepts and prototypes. You’ll also get a preview on other key topics in cognition, including problem-solving strategies like algorithms and heuristics.
You can view the transcript for “Cognition – How Your Mind Can Amaze and Betray You: Crash Course Psychology #15” here.
THINK IT OVER
Think about a natural concept that you know fully but that would be difficult for someone else to understand. Why would it be difficult to explain?
GLOSSARY
artificial concept: concept that is defined by a very specific set of characteristics
cognition: thinking, including perception, learning, problem solving, judgment, and memory
cognitive psychology: field of psychology dedicated to studying every aspect of how people think
cognitive script: set of behaviors that are performed the same way each time; also referred to as an event schema
concept: category or grouping of linguistic information, objects, ideas, or life experiences
event schema: set of behaviors that are performed the same way each time; also referred to as a cognitive script
natural concept: mental groupings that are created “naturally” through your experiences
prototype: best representation of a concept
role schema: set of expectations that define the behaviors of a person occupying a particular role
schema: (plural = schemata) mental construct consisting of a cluster or collection of related concepts
We are exploring the consequences and significance of grounding most mathematics in "axiomatic" definitions and systems.
We note that Gödel's Incompleteness Theorem is a statement about axiomatic systems, one that might not hold if the fundamental definitions from which mathematics is constructed were framed in some other way.
The initial claims we explore are:
- Axioms are inherently local fragments
- They are defined as linear sequences
- They are defined in terms of abstract symbols, where understanding of and agreement about what those symbols mean can be uncertain
When we say that "a = b", what exactly are we asserting? Identity is less stable than it first appears:
- Ship of Theseus
- Can't step into the same river twice
An axiom, postulate or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Greek axíōma (ἀξίωμα) 'that which is thought worthy or fit' or 'that which commends itself as evident.'
The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. As used in modern logic, an axiom is a premise or starting point for reasoning.
As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., (A and B) implies A), while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic).
When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there may be multiple ways to axiomatize a given mathematical domain.
Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics.
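The distinction above between logical and non-logical axioms can be made concrete in a proof assistant. Here is a minimal sketch in Lean 4: the logical axiom "(A and B) implies A" is provable from the rules of logic alone, while a non-logical axiom such as commutativity ("a + b = b + a") is a substantive assumption about a domain, modeled here as a hypothesis `comm` about an arbitrary operation `op` (the names `op` and `comm` are illustrative, not from the source).

```lean
-- A logical axiom (tautology): provable within the logic itself.
theorem and_elim_left (A B : Prop) : A ∧ B → A :=
  fun h => h.left

-- A non-logical axiom: a substantive assumption about the elements of
-- a domain, taken as a premise rather than proved from logic alone.
example {α : Type} (op : α → α → α)
    (comm : ∀ a b : α, op a b = op b a) (x y : α) :
    op x y = op y x :=
  comm x y
```

Note that nothing forces `comm` to be true of a given operation; adopting it is exactly the sense in which "axiom", "postulate", and "assumption" are interchangeable.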
Mental Models: How to Train Your Brain to Think in New Ways
Learning a new mental model gives you a new way to see the world, make decisions, and solve problems.
James Clear
You can train your brain to think better. One of the best ways to do this is to expand the set of mental models you use to think. Let me explain what I mean by sharing a story about a world-class thinker.
I first discovered what a mental model was and how useful the right one could be while I was reading a story about Richard Feynman, the famous physicist. Feynman received his undergraduate degree from MIT and his Ph.D. from Princeton. During that time, he developed a reputation for waltzing into the math department and solving problems that the brilliant Ph.D. students couldn’t solve.
When people asked how he did it, Feynman claimed that his secret weapon was not his intelligence, but rather a strategy he learned in high school. According to Feynman, his high school physics teacher asked him to stay after class one day and gave him a challenge.
“Feynman,” the teacher said, “you talk too much and you make too much noise. I know why. You’re bored. So I’m going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that’s in this book, you can talk again.” 1
So each day, Feynman would hide in the back of the classroom and study the book—Advanced Calculus by Woods—while the rest of the class continued with their regular lessons. And it was while studying this old calculus textbook that Feynman began to develop his own set of mental models.
“That book showed how to differentiate parameters under the integral sign,” Feynman wrote. “It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals.”
“The result was, when the guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn’t do it with the standard methods they had learned in school. If it was a contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” 2
Every Ph.D. student at Princeton and MIT is brilliant. What separated Feynman from his peers wasn't necessarily raw intelligence. It was the way he saw the problem. He had a broader set of mental models.

What Is a Mental Model?
A mental model is an explanation of how something works. It is a concept, framework, or worldview that you carry around in your mind to help you interpret the world and understand the relationship between things. Mental models are deeply held beliefs about how the world works.
For example, supply and demand is a mental model that helps you understand how the economy works. Game theory is a mental model that helps you understand how relationships and trust work. Entropy is a mental model that helps you understand how disorder and decay work.
Mental models guide your perception and behavior. They are the thinking tools that you use to understand life, make decisions, and solve problems. Learning a new mental model gives you a new way to see the world—like Richard Feynman learning a new math technique.
Mental models are imperfect, but useful. There is no single mental model from physics or engineering, for example, that provides a flawless explanation of the entire universe, but the best mental models from those disciplines have allowed us to build bridges and roads, develop new technologies, and even travel to outer space. As historian Yuval Noah Harari puts it, “Scientists generally agree that no theory is 100 percent correct. Thus, the real test of knowledge is not truth, but utility.”
The best mental models are the ideas with the most utility. They are broadly useful in daily life. Understanding these concepts will help you make wiser choices and take better actions. This is why developing a broad base of mental models is critical for anyone interested in thinking clearly, rationally, and effectively.

The Secret to Great Thinking and Decision Making
Expanding your set of mental models is something experts need to work on just as much as novices. We all have our favorite mental models, the ones we naturally default to as an explanation for how or why something happened. As you grow older and develop expertise in a certain area, you tend to favor the mental models that are most familiar to you.
Here's the problem: when a certain worldview dominates your thinking, you’ll try to explain every problem you face through that worldview. This pitfall is particularly easy to slip into when you're smart or talented in a given area.
The more you master a single mental model, the more likely it becomes that this mental model will be your downfall because you’ll start applying it indiscriminately to every problem. What looks like expertise is often a limitation. As the common proverb says, “If all you have is a hammer, everything looks like a nail.” 3
When a certain worldview dominates your thinking, you’ll try to explain every problem you face through that worldview.
Consider this example from biologist Robert Sapolsky. He asks, “Why did the chicken cross the road?” Then, he provides answers from different experts.
If you ask an evolutionary biologist, they might say, “The chicken crossed the road because they saw a potential mate on the other side.” If you ask a kinesiologist, they might say, “The chicken crossed the road because the muscles in the leg contracted and pulled the leg bone forward during each step.” If you ask a neuroscientist, they might say, “The chicken crossed the road because the neurons in the chicken’s brain fired and triggered the movement.”
Technically speaking, none of these experts are wrong. But nobody is seeing the entire picture either. Each individual mental model is just one view of reality. The challenges and situations we face in life cannot be entirely explained by one field or industry.
All perspectives hold some truth. None of them contain the complete truth.
Relying on a narrow set of thinking tools is like wearing a mental straitjacket. Your cognitive range of motion is limited. When your set of mental models is limited, so is your potential for finding a solution. In order to unleash your full potential, you have to collect a range of mental models. You have to build out your decision making toolbox. Thus, the secret to great thinking is to learn and employ a variety of mental models.

Expanding Your Set of Mental Models
The process of accumulating mental models is somewhat like improving your vision. Each eye can see something on its own. But if you cover one of them, you lose part of the scene. It’s impossible to see the full picture when you’re only looking through one eye.
Similarly, mental models provide an internal picture of how the world works. We should continuously upgrade and improve the quality of this picture. This means reading widely from the best books, studying the fundamentals of seemingly unrelated fields, and learning from people with wildly different life experiences. 4
The mind's eye needs a variety of mental models to piece together a complete picture of how the world works. The more sources you have to draw upon, the clearer your thinking becomes. As the philosopher Alain de Botton notes, “The chief enemy of good decisions is a lack of sufficient perspectives on a problem.”

The Pursuit of Liquid Knowledge
In school, we tend to separate knowledge into different silos—biology, economics, history, physics, philosophy. In the real world, information is rarely divided into neatly defined categories. In the words of Charlie Munger, “All the wisdom of the world is not to be found in one little academic department.” 5
World-class thinkers are often silo-free thinkers. They avoid looking at life through the lens of one subject. Instead, they develop “liquid knowledge” that flows easily from one topic to the next.
This is why it is important to not only learn new mental models, but to consider how they connect with one another. Creativity and innovation often arise at the intersection of ideas. By spotting the links between various mental models, you can identify solutions that most people overlook.

Tools for Thinking Better
Here's the good news:
You don't need to master every detail of every subject to become a world-class thinker. Of all the mental models humankind has generated throughout history, there are just a few dozen that you need to learn to have a firm grasp of how the world works.
Many of the most important mental models are the big ideas from disciplines like biology, chemistry, physics, economics, mathematics, psychology, philosophy. Each field has a few mental models that form the backbone of the topic. For example, some of the pillar mental models from economics include ideas like Incentives, Scarcity, and Economies of Scale.
If you can master the fundamentals of each discipline, then you can develop a remarkably accurate and useful picture of life. To quote Charlie Munger again, “80 or 90 important models will carry about 90 percent of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight.” 6
I've made it a personal mission to uncover the big models that carry the heavy freight in life. After researching more than 1,000 different mental models, I gradually narrowed it down to a few dozen that matter most. I've written about some of them previously, like entropy and inversion, and I'll be covering more of them in the future. If you're interested, you can browse my slowly expanding list of mental models.
My hope is to create a list of the most important mental models from a wide range of disciplines and explain them in a way that is not only easy to understand, but also meaningful and practical to the daily life of the average person. With any luck, we can all learn how to think just a little bit better.
How to become a genius
You probably think you’re pretty smart. Most people believe they’re smarter than the average American, according to a study from YouGov. Yet when it comes to IQ, most of us are indeed average, falling in the 80-119 point range. While this number peaks in our late teens to early 20s and remains relatively stable as we age, that doesn’t mean your potential is fixed.
“The fact is, intelligence can be increased–and quite dramatically,” writes behavior-analytic psychologist Bryan Roche of the National University of Ireland in Psychology Today. “Those who claim that IQ is fixed for life are in fact referring to our IQ test scores, which are relatively stable–not to our intelligence levels, which are constantly increasing.”
David Shenk, author of The Genius in All of Us, says it’s virtually impossible to determine any individual’s true intellectual limitations at any age; anyone has the potential for genius or, at the very least, greatness. The key is to let go of the myth that giftedness is innate.
“A belief in inborn gifts and limits is much gentler on the psyche: The reason you aren’t a great opera singer is because you can’t be one. That’s simply the way you were wired. Thinking of talent as innate makes our world more manageable, more comfortable. It relieves a person of the burden of expectation,” he writes in his book.
If you want to be smarter than the average American, it’s not only possible; it’s within reach. Intelligence is the ability to acquire and apply knowledge and skills, and includes the ability to reason, solve problems, remember information, and be creative. Increasing your intelligence–even taking yours to genius status–takes a willingness to do the work.
While taking a class and reading a book are two ways to learn something new, here are six surprising tasks that boost your brainpower, make learning easier, and put you on the road to greatness:

1. Train Your Memory
While a professor at the University of California, Irvine, Susanne Jaeggi found that an activity known as the n-back task increases fluid intelligence, which is the ability to reason and solve new problems independent of previous knowledge. The n-back game challenges participants to keep track of a stream of spoken letters or locations on a grid and identify when an item matches the one presented n steps earlier. N-back training, which is available for free online, can help improve memory and problem-solving skills.

2. Open Yourself to New Points of View
Another way to increase your intelligence is to expand your network and consider other people’s points of view. The exercise will open your mind to new opportunities and promote cognitive growth. Learning is the act of exposing yourself to new information, and meeting new people facilitates the process, especially when the viewpoints conflict with your own.
“Open your mind and listen to arguments that make no sense to you–and try to find some sense in them,” writes Roche.

3. Find Motivation
Uncommon achievement takes a source of motivation, says Shenk. “You have to want it, want it so bad you will never give up, so bad that you are ready to sacrifice time, money, sleep, friendships, even your reputation,” he writes in The Genius in All of Us.
Motivation can be conscious or unconscious, and can spring from a variety of sources, including inspiration, desperation, revenge, or future regret.

4. Do Cardiovascular Workouts
Cardiovascular fitness can raise your verbal intelligence and improve long-term memory, according to a study from the University of Gothenburg in Sweden.
“Increased cardiovascular fitness was associated with better cognitive scores,” writes lead researcher Maria Aberg in the Proceedings of the National Academy of Sciences. “In contrast, muscular strength was only weakly associated with intelligence.”

5. Play Video Games
While it looks like a good way to waste time, video gaming can actually stimulate the growth of neurons and promote connectivity in the regions of the brain responsible for spatial orientation, memory formation, and strategic planning. In a study conducted at the Max Planck Institute for Human Development and Charité University Medicine in Berlin, researchers found that video games such as Super Mario benefit the brain by improving sensory, perceptual, and attentional tasks.
Video games can reverse the negative effects of aging on the brain as well. In a study done at the University of California, San Francisco, researchers found a specially designed 3-D driving video game boosted mental skills like multitasking and focusing in older adults. Gaming improved their short-term memory, a cognitive ability that typically declines with age.

6. Meditate
Mindful meditation can increase the neuroplasticity in the brain, according to a study from the University of Oregon and Texas Tech University’s Neuroimaging Institute.
Participants in a five-day study were led through guided meditation for 20 minutes a day, focusing on breathing, posture, and mental imagery. Researchers found the practice improved the efficiency of their brain’s white matter, significantly improving attention and fluid intelligence.