

I just had never been called upon to think about this before. We took the value of the business we were in for granted. There is no intellectual equivalent of the hundred-yard dash. An intelligent person is open-minded, an outside-the-box thinker, an effective communicator, prudent, self-critical, consistent, and so on.


These are not qualities readily subject to measurement. Society needs a mechanism for sorting out its more intelligent members from its less intelligent ones, just as a track team needs a mechanism such as a stopwatch for sorting out the faster athletes from the slower ones. Society wants to identify intelligent people early on so that it can funnel them into careers that maximize their talents. It wants to get the most out of its human resources. College is a process that is sufficiently multifaceted and fine-grained to do this.


College is, essentially, a four-year intelligence test. Students have to demonstrate intellectual ability over time and across a range of subjects. As an added service, college also sorts people according to aptitude. It separates the math types from the poetry types. At the end of the process, graduates get a score, the G.P.A.


I could have answered the question in a different way: left to their own devices, people will have no incentive to acquire the knowledge and skills important for life as an informed citizen, or as a reflective and culturally literate human being. College exposes future citizens to material that enlightens and empowers them, whatever careers they end up choosing.


In performing this function, college also socializes. It takes people with disparate backgrounds and beliefs and brings them into line with mainstream norms of reason and taste. Independence of mind is tolerated in college, and even honored, but students have to master the accepted ways of doing things before they are permitted to deviate. Ideally, we want everyone to go to college, because college gets everyone on the same page. If you like the first theory, all that matters is the grades. If you prefer the second theory, then you might consider grades a useful instrument of positive or negative reinforcement, but the only thing that matters is what students actually learn.


A lot of confusion is caused by the fact that American higher education has been committed to both theories. The system is designed to be both meritocratic (Theory 1) and democratic (Theory 2). Professional schools and employers depend on colleges to sort out each cohort as it passes into the workforce, and elected officials talk about the importance of college for everyone. We want higher education to be available to all Americans, but we also want people to deserve the grades they receive.


Between 1906 and 1932, four hundred and five boys from Groton applied to Harvard. Four hundred and two were accepted. In 1932, Yale received thirteen hundred and thirty applications, and it admitted nine hundred and fifty-nine, an acceptance rate of seventy-two per cent. Almost a third of those who enrolled were sons of Yale graduates. Then, through the exertions of people like James Bryant Conant, the president of Harvard, the Educational Testing Service went into business, and standardized testing, in the form of the S.A.T., entered the admissions process.


Conant regarded higher education as a limited social resource, and he wanted to make the gate more strait. Testing ensured that only people who deserved to go to college did.


The fact that Daddy went no longer sufficed. In 1940, the acceptance rate at Harvard was eighty-five per cent. By 1970, it was twenty per cent. Last year, thirty-five thousand students applied to Harvard, and the acceptance rate was six per cent.


Columbia, Yale, and Stanford admitted less than eight per cent of their applicants. This degree of selectivity is radical. To put it in some perspective: the acceptance rate at Cambridge is twenty-one per cent, and at Oxford eighteen per cent. But, as private colleges became more selective, public colleges became more accommodating. Proportionally, the growth in higher education has been overwhelmingly in the public sector.


Today, public colleges enroll almost fifteen million students, private colleges fewer than six million. There is now a seat for virtually anyone with a high-school diploma who wants to attend college.

All fascinating stuff. On the other hand, I feel that the book's subtitle, 'the new science of education', is misleading; much of the work discussed here isn't strictly new, and much of it isn't directly relevant to education.


For instance, in the final section discussing the 'four pillars of learning', much of the work relates to animal studies or infants, with only a few light pepperings of education research. To some extent this reflects the dearth of scientific education research, something Dehaene acknowledges in the conclusion, but I wonder how some teachers would react to his thirteen key takeaway messages, such as 'enrich the environment', 'accept and correct mistakes', and 'set clear learning objectives'.


Not exactly revolutionary. I note that this subtitle was changed from 'why brains learn better than any machine, for now', used for the hardback edition, which was much more appropriate. In conclusion, I think this book is best enjoyed with its original subtitle in mind. Don't expect in-depth application of cognitive science to education, and just enjoy the neuroscience!


This will probably be one of my favorite books of the year! It brilliantly takes on the links between education science, neurology, and cognitive science. Many myths about learning are dismantled in a captivating and clever way. Although I wasn't really interested in the first part, about machines and their ability to learn, it ended up being fairly fascinating and, most importantly, linked up well with the rest of the book.


I would recommend this non-fiction book to everyone, especially parents, teachers, and doctors, because it highlights, in a very accessible manner, the basics and the new findings about how children learn most effectively.


Truly a masterpiece!

Michiel Mennen: A wonderful read. Part homage to the unbelievable capacity of the human brain, compared with what is possible with AI and machine-learning algorithms. Part specific, hands-on guidance for educators, trainers, and people in general on how learning actually works and how to get the most out of learning experiences. From the nitty-gritty detail of the inner workings of the brain to the more general conceptual pillars of a solid learning strategy.


The beginning may be a bit of a tough read, but worth following through! And get a good night's sleep after every reading session. :)

Axel Jantsch: The book describes the mechanisms of human learning and is informed as much by neuroscience and cognitive science as by computer science.


It is in fact surprising how much of Dehaene's explanation and elaboration is supported by the way machine-learning algorithms work. I suspect the reason for this lies in how we study mammalian brains: what is experimentally accessible and what is not. Two directions of research have a relatively long history and have brought key insights: the study of nerve cells at the microscopic level, and behavioral experiments that reveal macro-level mechanisms of our brain.


Santiago Ramón y Cajal, equipped with a microscope and an artistic talent, produced numerous drawings that shape our perception of the cellular organization of the brain up to today. Cajal revealed the daunting complexity and variety of nerve cells and discovered that they consist of dendrites, a cell body, and an axon, and he speculated about the direction of information flow by adding arrows to his diagrams: from the dendritic tree to the cell body to the axon.


Cajal not only discovered that brain tissue is made up of distinct neural cells but also that they come into contact with each other at points we today call synapses. Synapses play a key role in learning because they adapt to the activities of information transfer. Their growth, strengthening, weakening, and disappearance are key mechanisms of information storage and learning. They adapt over time spans of minutes, hours, days, weeks, and months, and keep changing throughout life.


A basic rule of learning, put forward by Donald Hebb, is that neurons that "fire together wire together", meaning that a synaptic connection is strengthened when both the pre-synaptic and the post-synaptic neuron are active at the same time. Thanks to many studies of synaptic adaptation and of anatomical changes in neurons in response to neuronal activity, today we understand fairly well the cellular mechanisms underlying learning, memorization, and recall.


When neurons that react to the image of a particular face are often activated at the same time as neurons representing a name, the connections between these two groups of neurons are strengthened, and we associate the face with the name.
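To make the rule concrete, here is a minimal sketch in Python (my own illustration, not code from the book; the population sizes, activity patterns, and learning rate are all made up): repeated co-activation of a "face" pattern and a "name" pattern strengthens exactly the connections between them.

    import numpy as np

    def hebbian_update(w, pre, post, lr=0.01):
        """One Hebbian step: strengthen w[i, j] whenever pre-synaptic
        unit j and post-synaptic unit i fire together."""
        return w + lr * np.outer(post, pre)

    w = np.zeros((4, 3))                    # post x pre connection weights
    face = np.array([1.0, 0.0, 1.0])        # activity pattern evoked by a face
    name = np.array([0.0, 1.0, 0.0, 1.0])   # co-active pattern encoding a name

    for _ in range(100):                    # repeated co-activation
        w = hebbian_update(w, face, name)

    # Presenting the face alone now drives the associated name pattern.
    print(np.round(w @ face, 2))            # -> [0. 2. 0. 2.]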


A huge number of clever behavioral experiments have given us detailed insights into a great variety of mental capabilities of the human and mammalian brain. For instance, babies are born with an innate capability to count and do approximate arithmetic; they have a number sense. Within sight of the baby, objects are moved behind a curtain; then the curtain is removed and the baby's reaction is observed.


If you move one ball and then another ball behind the curtain, and two balls are visible once the curtain is removed, the baby shows no surprise. If there is only one ball, or three balls, the baby is very surprised. This works for counting, addition, and subtraction. If you move 10 balls behind the curtain and then remove 5, the baby is surprised if there are still 8 balls behind the curtain.


Many experiments have confirmed that it is abstract number that babies attend to, not other physical properties like the shape, color, or size of the objects. This innate number sense has also been confirmed in many other animals. For instance, experimenters went to great lengths to make sure that newborn chicks had not seen any objects before the experiments, and they could still reason about numbers.


Behavioral experiments have given us insight into innate capabilities like the number sense, a sense of probabilities, learning languages, and recognizing faces. They have shown how we build up episodic memories, how we interact with other people, how we balance long-term against short-term desires, and so on. While these two lines of investigation have given us an extensive and detailed understanding of the human brain, the level in between seems to be harder to decipher.


How do we abstract from pictures of flowers to the concept of a flower? How does neural circuitry correctly predict the trajectory of a ball and direct our body to catch it in flight? How do we estimate that it is safe to cross the street right before a moving car? What is the language of thought, what is its grammar, and how do we determine its vocabulary? These and many other feats involve the coordinated activity of huge numbers of neurons in different regions of the brain, and the communication of large amounts of information across data highways between brain modules.


They require the integration of several senses and of specialized brain circuitry into a unified assessment. They require coordination over periods of time, and the building and use of symbols and concepts at the right level of abstraction, not too low and not too high. Many of these mechanisms are still poorly understood, and therefore a theory of how they may work can greatly structure the investigation, guiding the search for experimental evidence that confirms or refutes hypotheses.


For learning, the rapidly developing field of machine learning in computer science provides a framework for generating hypotheses about how learning in the brain may work. This, I suspect, is why Dehaene draws heavily on parallels to deep artificial neural networks, a particularly successful branch of machine learning, to contemplate and explain features of learning in the human brain.


Ironically, artificial neural networks were inspired by natural neural networks more than sixty years ago, but only in the last ten years have they emerged as a highly successful branch of machine learning, in the form of Deep Neural Networks (DNNs).


DNNs consist of tens or even hundreds of layers of neurons, with each layer's output serving as the input of the next layer. The "deep" in DNN refers to this large number of layers. DNNs have been shown to outperform humans in a wide range of tasks: object detection, face recognition, sensor data analysis, medical image analysis, games, transforming images and videos into fakes, and so on.


They have a number of features that certainly also play key roles in human learning. For instance, error correction is equally important in DNNs and in human learning. In a DNN, an input, say a handwritten "2", is processed sequentially by all the layers and a response is generated, in this case the recognition of a digit. During training of the DNN, the correct answer is then presented to the network, and the error is propagated back from the output all the way to the input.


During this back-propagation, parameters are adjusted so that the same image will be processed more correctly the next time. After this training procedure has been repeated hundreds or thousands of times with different images, the parameters of the DNN are adjusted such that it produces correct outputs for all images in the training set. Moreover, it also performs very well on images that are sufficiently similar to those in the training set.
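Here is the whole procedure in miniature (a toy sketch of standard back-propagation, not the book's code; the XOR task, layer sizes, and learning rate are arbitrary choices of mine):

    import numpy as np

    # A tiny two-layer network trained by back-propagation on XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: each layer's output is the next layer's input.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Error at the output, propagated back toward the input.
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)

        # Adjust the parameters so the same inputs are processed
        # more correctly next time.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]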


Hence, the DNN in fact abstracts from the individual images. The larger the training set and the deeper the DNN, the more powerful it becomes in terms of abstraction and object recognition. If you are unhappy with the performance of your network, simply add more layers, increase the training set tenfold, rerun the training, and the results will most likely astonish you.


Eight Definitions of Learning

The book starts out with eight definitions of learning:

1. To learn is to form an internal model of the external world.
2. Learning is adjusting the parameters of a mental model.
3. Learning is exploiting a combinatorial explosion.
4. Learning is minimizing errors.
5. Learning is exploring the space of possibilities.
6. Learning is optimizing a reward function.
7. Learning is restricting a search space.
8. Learning is projecting a priori hypotheses.

These definitions seem to be almost exclusively inspired by computer science and exhibit little trace of neuroscience or psychology.


However, they seem to apply exceedingly well to what is actually happening in the human brain, which lends support to the idea that a computer-science-inspired theoretical framework of learning can be very fruitful in researching the brain's capabilities and mechanisms.
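As a toy illustration of definitions 2 and 4 (my own example, not Dehaene's): even a one-parameter "internal model" can be learned by repeatedly measuring the prediction error and adjusting the parameter to reduce it.

    # A one-parameter "internal model" y = w * x, learned by nudging
    # w to shrink the squared prediction error.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # noisy samples of y = 2x

    w, lr = 0.0, 0.05
    for epoch in range(200):
        for x, y_true in data:
            error = w * x - y_true   # definition 4: measure the error
            w -= lr * error * x      # definition 2: adjust the parameter

    print(round(w, 2))  # close to 2.0, the regularity behind the data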


However, Dehaene is quick to point out that there are a number of routine accomplishments of the human brain that machine learning algorithms cannot replicate. For example, DNNs need thousands or even hundreds of thousands of examples during training, while humans can learn very effectively from a few, or even a single, example. DNNs are most effective when the correct answers for the training set are known, which is called supervised learning.


Human learning is mostly unsupervised, and very effective at it. Humans often learn by analogy, transferring rules and patterns from one domain to another. If you play a lot of chess, your chess strategies may inform you when planning your career. When you do a lot of high-effort hiking, you may take its lessons about ups and downs, effort and sweat, planning and perseverance into other domains, like starting a business. Machine learning algorithms cannot do this today and are not close to it.


Also, humans are good at penetrating new territory and learning the regularities of a new domain even if they know little or nothing about it at the outset. Human "learning is inferring the grammar of a domain", writes Dehaene. Again, computer science has yet to come up with algorithms that can do that. As a result, research in artificial intelligence and in cognitive science is mutually inspiring. Computer scientists see in the brain an example of what is possible and use it to find ways to accomplish similar behavior.


Neuroscientists, in turn, can hypothesize and test whether mechanisms developed successfully for machine learning are also at work in the human brain.

Memory in the Brain

Dehaene describes our current understanding of memory structures in the brain as follows.


Working memory consists of activity patterns of neurons. There is no permanent, physical change in the brain underlying working memory: if the current pattern of activity changes, the current content of working memory is lost. The amount of information in working memory is very limited; only a few symbols can be stored there. Its duration is also limited, and after a few seconds the content of working memory fades away.
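A loose way to picture this (my own toy model, not one from the book): working memory as a transient activation that decays within seconds unless refreshed, with no synaptic weight ever changed.

    # Working memory as decaying activity: the trace is gone after a
    # few steps, and nothing is written into any weights.
    decay = 0.6               # fraction of activity surviving each second
    activity = 1.0            # activation of a just-encoded item

    for t in range(1, 9):     # eight one-second steps without rehearsal
        activity *= decay
        print(f"t={t}s  activity={activity:.3f}")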


Episodic memory is a weird thing. It gives us identity, history, and stability over time. It seems highly efficient: streams of events, images, and scenarios are continuously recorded without effort.


It also seems highly effective, giving priority to important events, which are remembered years later, while unimportant events quickly fade into oblivion.


The hippocampus, a brain module below the cortex, is the gatekeeper of episodic memory, recording the unfolding episodes of our lives. Neurons in the hippocampus seem to memorize the context of each event: they encode where, when, how, and with whom things happened.


They store each episode through synaptic changes, so we can remember it later. The famous patient H.M., who lost his hippocampus to surgery, could no longer record new episodic memories, yet many of his older memories survived. So while the hippocampus records episodic memory, the memories are not necessarily stored there.


Also, H.M. could still learn new skills; he just did not remember that, when, and how he had acquired them.

Semantic memory

While the hippocampus is key in recording new memories, they are stored throughout the brain. At night, the brain plays them back and moves them to a new location within the cortex.

