This article is an oldie but goodie, and I put it up for those of you who haven’t read it before. The comments from parents and educators are worth reading as well. CLICK HERE to access the piece at the New York Magazine website. We are putting a lot of pressure on our little 4-year-olds to prove themselves worthy of a better education before many can write their own names. But what’s a parent to do? This is not only a NYC phenomenon. Parents all over the country must prepare their children – who have only learned to talk in the last 3 years – to solve complex analogies and compute addition and subtraction in hopes of getting them a spot in a competitive school. If they don’t qualify at the kindergarten level, it becomes much harder to get them in later, when open spots in gifted programs depend on attrition.
The Junior Meritocracy
Should a child’s fate be sealed by an exam he takes at the age of 4? Why kindergarten-admission tests are worthless, at best.
Skylar Shafran, a turquoise headband on her brunette head and a pink princess shirt on her string-bean frame, is standing on a chair in her living room, shifting from left foot to right. She has already gulped down a glass of orange juice and nibbled on some crackers; she has also demonstrated, with extemporaneous grace, the ability to pick up Hello Kitty markers with her toes. For more than an hour, she has been answering questions from a mock version of an intelligence test commonly known to New York parents as the ERB. Almost every prestigious private elementary school in the city requires that prospective kindergartners take it. Skylar’s parents, Liz and Jay, are pretty sure they know where they’re sending their daughter to school next year, but they figure it can’t hurt to get a sense of where she sits in the long spectrum of precocious New York children. And so, although it wasn’t cheap—$350—they’ve hired someone to find out. Skylar has thus far borne this process with cheerful patience and determination. But every 4-year-old has her limits.
“What is an umbrella?” asks the evaluator, a psychology graduate student in her mid-twenties.
“To keep me dry.”
“And what is a book?”
“Something you read.”
“What is a house?”
Skylar squirms, teeters a bit.
“A house?” the tester repeats.
Skylar looks at her mother. “I have to go pee.”
Later, when the evaluation is over, Liz confesses she’s ambivalent about inviting a stranger into her home to assess her 4-year-old and even more ambivalent about the idea of prepping her for a standardized test, should it turn out she needs preparation. “It’s just that I want choices for her,” she says. “It’s an immigrant mentality. You want what’s best for your kid.”
The beauty of a meritocracy is that it is not, at least in theory, a closed system. With the right amount of pluck and hard work, a person should be able to become whoever he or she is supposed to be. Only in an aristocracy is a child’s fate determined before it is born.
Yet in New York, it turns out that an awful lot is still determined by a child’s 5th birthday. Nearly every selective elementary school in the city, whether it’s public or private, requires standardized exams for kindergarten admission, some giving them so much weight they won’t even consider applicants who score below the top 3 percent. If a child scores below this threshold, it hardly spells doom. But if a child manages to vault over it, and in turn gets into one of these selective schools, it can set him or her on a successful glide path for life.
Consider, for instance, Hunter College Elementary School, perhaps the most competitive publicly funded school in the city. (This year, there were 36 applicants for each slot.) Four-year-olds won’t even be considered for admission unless their scores begin in the upper range of the 98th percentile of the Stanford-Binet Intelligence Scales, which costs $275 to take. But if they’re accepted and successfully complete third grade (few don’t), they’ll be offered admission to Hunter College High School. And since 2002, at least 25 percent of Hunter’s graduating classes have been admitted to Ivy League schools. (In 2006 and 2007, that number climbed as high as 40.) Or take, as another example, Trinity School. In 2008, 36 percent of its graduates went to Ivy League schools. More than a third of those classes started there in kindergarten. Thirty percent of Dalton’s graduates went to Ivies between 2005 and 2009, as did 39 percent of Collegiate’s, and 34 percent of Horace Mann’s. Many of these lucky graduates wouldn’t have been able to go to these Ivy League feeders to begin with, if they hadn’t aced an exam just before kindergarten. And of course these advantages reverberate into the world beyond.
Given the stakes, it’s hardly a surprise that New Yorkers with means and aspirations for their children would go to great lengths to help them. Rather, what’s surprising is that a single test, taken at the age of 4, can have so much power in deciding a child’s fate in the first place. The fact is, 4 is far too young an age to reach any conclusions about the prospects of a child’s mind. Even administrators who use these exams—indeed, especially the administrators who use these exams—say they’re practically worthless as predictors of future intelligence. “At information meetings,” says Steve Nelson, head of the famously progressive Calhoun School, “I’ll often ask a room full of parents when their children started to walk.” Invariably, their replies form a perfect bell curve: a few at 9 and 10 months, most at 12 or 13, a few as late as 15 to 18. “And then I’ll ask: ‘What would you think if you were walking down the street, and you saw a parent yanking a 1-year-old child up from the sidewalk, screaming, ‘Walk, damn it?’ ” The same, he says, is true of a system that insists a child perform well on a test at 4 years of age. “Early good testers don’t make better students,” he tells me, “any more than early walkers make better runners.”
Let’s start with the most basic problem: School starts in kindergarten. No matter how a child is doing at that moment, no matter where that child is in the great swoop of his or her developmental arc, that’s when parents send their kids off to school. Given this very concrete constraint, standardized tests seem as fair a means as any to find gifted 4-year-olds—if not the fairest, considering the city’s tremendous cultural and socioeconomic diversity. That one test-taking experience may be the sole experience all kids share, and their scores the sole application datum that’s neither prejudicial (like a family’s net worth) nor subjective (like recommendations from nursery schools). Unfortunately, not all city schools use the same tests, which means that first-time parents, already overwhelmed by the usual formalities of school enrollment, are forced to cut through a smog of acronyms. New York City public schools use the Otis-Lennon School Ability Test, or OLSAT, to help determine which students are eligible for their gifted-and-talented programs. The private schools use a modified version of the Wechsler Preschool and Primary Scale of Intelligence, or WPPSI-III, pronounced “whipsy.” (Yet because the Educational Records Bureau administers it—for a cost of $495—it is still better known to some parents as the ERB.) Hunter, because it operates under the auspices of Hunter College rather than the Department of Education, uses the fifth edition of the Stanford-Binet Intelligence Scales, or SB-5, to narrow down its first round of applicants. How these tests differ is mainly a question of emphasis and style: The OLSAT looks much more like an actual school exam—it’s administered by a licensed teacher, answered in multiple-choice bubbles in a workbook, and a bit more biased in content toward school readiness, like following verbal directions—while the WPPSI and SB-5 are IQ tests, interpreted by psychologists and more biased toward abstract reasoning.
But the truth is, all three are pretty similar, at least at this level. As W. Steven Barnett, co-director of Rutgers’ National Institute for Early Education Research, notes: “Odds are they’re all going to have kids do something with triangles.”
Those who are bullish on intelligence tests argue they’re “pure” gauges of a child’s mental agility—immune to shifts in circumstance, immutable over the course of a lifetime. Yet everything we know about this subject suggests that there are considerable fluctuations in children’s IQs. In 1989, the psychologist Lloyd Humphreys, a pioneer in the field of psychometrics, came out with an analysis based on a longitudinal twin study in Louisville, Kentucky, whose subjects were regularly IQ-tested between ages 4 and 15. By the end of those eleven years, the average change in their IQs was ten points. That’s a spread with significant educational consequences. A 4-year-old with an IQ of 85 would likely qualify for remedial education. But that same child would no longer require it if, later on, his IQ shoots up to 95. A 4-year-old with an IQ of 125 would fall below the 130 cutoff for the G&T programs in most cities. Yet if, at some point after that, she scores a 135, it will have been too late. She’ll already have missed the benefit of an enhanced curriculum.
These fluctuations aren’t as odd as they seem. IQ tests are graded on a bell curve, with the average always being 100. (Definitions vary, but essentially, people with IQs of 110 to 120 are considered smart; 120 to 130, very smart; 130 is the favorite cutoff for gifted programs; and 140 starts to earn people the label of genius.) If a child’s IQ goes down, it doesn’t mean he or she has stopped making intellectual progress. It simply means that this child has made slower progress than some of his or her peers; the child’s relative standing has gone down. As one might imagine, kids go through cognitive spurts, just as they go through growth spurts. One of the classic investigations into the stability of childhood IQ, a 1973 study by the University of Pittsburgh’s Robert McCall and UC–San Diego’s Mark Appelbaum and colleagues, looked at 80 children who’d taken IQ tests roughly once a year between the ages of 2½ and 18. It showed that children’s intellectual trajectories were marked by slow increases or decreases, with inflection points around the ages of 6, 10, and 14, during which scores more sharply turned up or down. And when were IQs the least stable? Before the age of 6. Yet in New York we track most kids based on test scores they got at 4. (And we may not even be the worst offenders: As Po Bronson and Ashley Merryman note in their new book, NurtureShock, there are cities with preschools that require IQ tests of 2-year-olds.) “How can you lock children into a specialized educational experience at so young an age?” asks McCall. “As soon as you start denying kids early, you penalize them almost progressively. Education and mental achievement builds on itself. It’s cumulative.”
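The arithmetic behind those cutoffs is easy to check. Here is a minimal back-of-envelope sketch, assuming only the standard IQ norming (scores scaled to a mean of 100 and a standard deviation of 15); the specific scores used are illustrative:

```python
from statistics import NormalDist

# Standard IQ norming: scores are scaled to mean 100, SD 15.
iq = NormalDist(mu=100, sigma=15)

# The common "gifted" cutoff of 130 sits two standard deviations
# above the mean -- roughly the top 2 to 3 percent of test takers.
print(f"Percentile at IQ 130: {iq.cdf(130):.1%}")  # ~97.7%

# A ten-point swing -- the average change Humphreys observed between
# ages 4 and 15 -- is enough to move a child across the line.
for score in (125, 135):
    print(score, "meets a 130 cutoff:", score >= 130)
```

Nothing about the child has to change for this to matter; the same ten-point drift that is routine over childhood is larger than the gap between qualifying and not qualifying.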
Most researchers in the field of childhood development agree that the minds of nursery-school children are far too raw to be judged. Sally Shaywitz, author of Overcoming Dyslexia, is in the midst of a decades-long study that examines reading development in children. She says she couldn’t even use the reading data she’d collected from first-graders for some of the longitudinal analyses. “It simply wasn’t stable,” she says. I tell her that most New York City schools don’t share this view. “A young brain is a moving target,” she replies. “It should not be treated as if it were fixed.”
Complicating matters further, IQs are least stable at the highest end of the spectrum no matter what age they’re assessed. The explanation for this is simple: There’s more room to fall the higher you go, and hence a greater likelihood that the score will regress toward the mean. Chance figures more prominently into high scores—a good night’s sleep, comfort with the tester—and lucky guesses on tough questions are worth more points than answers to midrange questions. In 2006, David Lohman, a psychologist at the University of Iowa, co-authored a paper called “Gifted Today but Not Tomorrow?” in the Journal for the Education of the Gifted, demonstrating just how labile “giftedness” is. It notes that only 45 percent of the kids who scored 130 or above on the Stanford-Binet would do so on another, similar IQ test at the same point in time. Combine this with the instability of 4-year-old IQs, and it becomes pretty clear that judgments about giftedness should be an ongoing affair, rather than a fateful determination made at one arbitrary moment in time. I wrote to Lohman and asked what percentage of 4-year-olds who scored 130 or above would do so again as 17-year-olds. He answered with a careful regression analysis: about 25 percent.
The implication of this number is pretty startling. It means that three quarters of the seniors in a gifted program would no longer test into that program if asked to retake an IQ test on graduation day. So I wrote Lohman back: Was he certain about this?
“Yes,” he replied. “Even people who consider themselves well versed in these matters are often surprised to discover how much movement/noise/instability there is even when correlations seem high.” He was careful to note, however, that this doesn’t mean IQ tests have no predictive value per se. After all, these tests are better—far better—at predicting which children will have a 130-plus IQ at 17 than any other procedure we’ve devised. To have some mechanism that can find, during childhood, a quarter of the adults who’ll test so well is, if you think about it, impressive. “The problem,” wrote Lohman, “is assigning kids to schools for the gifted on the basis of a test score at age 4 or 5 and assuming that their rank order among age mates will be constant over time.”
Appelbaum, McCall’s co-author, puts an even finer point on the stakes. “No university I know,” he says, “would think of using a 4-year-old’s data to decide who to admit.”
A January 5 thread from the parenting website DCurbanmom:
Can anyone offer advice on whether I should by [sic] Aristotle Circle? I’m in a time crunch. Thanks!
My sister-in-law bought Aristotle Circle workbook and showed it to me. As a child psychologist, the workbook is so close to the real thing, I think it is cheating. That said, my nephew aced the test …
It is so sad that we have to do this—but what to do? [dear child] is at a disadvantage if everyone else is prepping and we are not.
There was a time, not that long ago, when few parents attempted to prep their 4-year-olds for kindergarten-admission exams. But then a few more began to do it, and then a few more after that, and then suddenly, normal-seeming people with normal-seeming values began doing it, too, and an arms-race mentality kicked in. Responding to parents’ anxieties and fears, some of the fancier preschools began subtly prepping their students—giving them similar exercises to do with blocks, introducing them to the concept of analogies. Expensive test-prep kits suddenly began to appear on the market. And high-end education consultancies began to bloom, like Aristotle Circle. Founded in 2008 by an M.I.T. graduate and former Wall Street analyst named Suzanne Rheault, it provides tutors, advisers, and—most important—prep books for apprehensive and even merely conscientious parents.
“I can understand people getting offended by 4-year-olds getting tutoring for these exams,” says Rheault when we meet in her Soho conference room. “But I’m not the one making them take them.”
She dumps a bag of blocks onto the conference table. They’re essentially the same ones used on the WPPSI, except hers are white and blue rather than white and red. Then she plops down her meticulous, brightly designed prep book, which she just completed last August. She opens to the “Vocabulary” section, illustrated by a former cartoonist for Disney. “Any vocabulary the child needs,” she tells me, “is in this book,” whether it’s to complete picture analogies or understand questions that are asked of them. Then she flips to a section of the types of questions the children will be asked aloud—What is a villain? What is a liquid?—and a few pages after that, she gets to what she believes is the “core intellectual meat” of the exam: “Concept groupings,” or pages of pictures organized by how the objects in them are linked. Containers: picnic baskets, suitcases, matchboxes. Things that open and close: zippers, eyes, locks. Measuring instruments: hourglasses, watches, thermometers. “Any of the abstract groupings the child needs to understand are also here,” she tells me.
How does Rheault know all this? I ask her, incredulously. Has she seen one? You have to be specially registered with the publisher to buy the WPPSI. Like most IQ tests, it is updated only periodically, which makes it coveted by parents—if you’ve seen one lately, you’ve likely seen the version your child will take.
“I’m not going to talk about it,” she replies. “But the people who helped us develop the workbook are psychologists who’ve seen them.”
But copies of this test are obviously floating around. Skylar’s mother, for instance, says she was offered a copy of the WPPSI by a fellow mom. Type a few key search words on Urbanbaby.com, and within 30 seconds you’ll find this post: Have WWPSI-III to sell. Excellent condition. Complete set. E-mail me if you are serious and discreet. No questions asked. Cost is $3,000. (An e-mail address follows.) This past fall, a parent admitted to a psychologist who administers SB-5 tests for Hunter that he’d purchased a copy of the exam right off the publisher’s website. “The type of tests we sell are primarily for special education, so it’s never been an issue for us in the past,” says Elizabeth Allen, the director of research and development of Pro-Ed Inc., which only recently acquired the rights to the Stanford-Binet. “When I heard, I was like, ‘You’re kidding me! Some parent paid a thousand dollars so they could get their kid into a gifted program? Wow.’ ” (The company has since fixed the problem; now only licensed professionals can buy them.)
There are some who insist that studying for these exams can’t possibly budge a child’s scores. “I don’t know how prepping could help on the OLSAT,” says Anna Commitante, head of the Gifted and Talented programs for the city’s Department of Education. But Rheault can’t believe there’s still any debate about the subject. “The psychologists we work with,” she says, “say that 50 to 60 percent of the material is learnable.” Yes, her point of view may be colored by her commercial interests—her WPPSI prep books go for $500, and she’s now completing a workbook for the OLSAT and will shortly start one for the SB-5. But she’s hardly alone in her beliefs. “When people say this stuff isn’t really coachable, I always scratch my head and say, ‘Yeah, except for the parts that are,’ ” says Jonathan Plucker, director of the Center for Evaluation and Education Policy at Indiana University. “I understand the nature/nurture debate. It’s a complicated relationship. But to say that families with greater means and more interest in education can’t influence test outcomes—I can’t understand that reasoning. It’s common sense.”
The practice of prepping can run families into the thousands of dollars, posing a clear disadvantage to those who can’t afford it. But the truth is, even without coaching, children coming from economically and culturally rich backgrounds do far better on these tests. And that’s a far more urgent reason to challenge the widespread reliance on them.
“An analogy people use a lot for this is planting corn,” says Barnett, from Rutgers. “If you want to know about the properties of different kinds of corn, you have to plant it in land that’s well fertilized and well irrigated. If you plant it in soil that’s dried up and rocky, you won’t know, because nothing will grow.” The same, he explains, goes for children. How can one possibly know anything about their minds if they’ve spent their first four years in unstimulating environments?
“People have the idea that with these tests you can cancel out socioeconomic background and get to some real thing in the kid,” agrees Nicholas Lemann, dean of the journalism school at Columbia and author of The Big Test, a history of the SAT. “That’s a chimera. If you’re a 4-year-old performing well on these tests, it’s either because you have fabulous genetic material or because you have cultural advantages. But either way, the point is: You’re doing better because of your parents.”
Rather than promoting a meritocracy, in other words, these tests instead retard one. They reflect the world as it’s already stratified—and then perpetuate that same stratification.
“Instead of giving IQ tests, you could just as easily look at Zip Codes and the education levels of the parents to determine who gets the better schooling—you get a very high correlation between IQ and socioeconomic status in the first seven or eight years of life,” says Samuel J. Meisels, assessment expert and president of Chicago’s Erikson Institute, the renowned graduate school in childhood development. “Giftedness is a real thing, no question. But giftedness can be extinguished, and it can be nurtured.” He mentions a New York Times education analysis from 2008, which noted that after the city streamlined its G&T program, requiring specific cutoff scores for the OLSAT, the percentage of white students had shot up from 33 to 48 percent, while the percentage of black and Hispanic enrollment had fallen. “Sometimes,” he says, “you look at a big city’s decisions to do this and wonder if it’s about nurturing giftedness or if it’s about keeping middle-class families in the city limits.”
Skylar is allowed her potty break. She returns and stands on top of her chair.
“Okay!” says her evaluator, smiling. “So … what is a house?”
“I already know. A home.”
She gives Skylar a playful look and tips her head. “And what’s a home?”
Skylar mirrors her tipped head. “A house!”
She laughs. “What’s a bird?”
Skylar picks up her Hello Kitty pen and bounces it on her tester’s arm. “Look, a hopping marker!”
Her tester smiles. “What’s a bird—”
Skylar races the pen up and down. “Vrooooooooom! Magic marker! Vroom vroom!”
Watching this exchange is a reminder of something any parent knows: Four-year-olds, no matter how smart and delightful they may be, have obvious limits as test takers. Many, especially boys, can’t sit still for the full duration of an exam; others can’t stay awake or concentrate for that long, choosing at some catastrophic point to crawl under their desks and give up. Nor is the context in which these tests are administered exactly relaxing for young children. Both IQ tests require that they sit alone in a room with a tester they probably haven’t seen before. In the case of the WPPSI, the tester often isn’t allowed to prompt the children to give more complete answers, even if it’s clear they’re capable of delivering them (and would score better if they did). In the case of the OLSAT, the testers can’t even repeat the questions.
“What is a pet?”
“An animal. I have pet goldfish.”
Her tester decides to play along this time. “Do they have names?”
“Zoe and Tangerine.”
Skylar plants her marker next to a rectangular-shaped sticker she’d gotten as a reward for a previous exercise and admires the shape she’s just made. “Look! A flag!”
Stephen J. Bagnato, a professor of pediatrics and psychology at the University of Pittsburgh, is fond of quoting Head Start co-founder Urie Bronfenbrenner, who in 1977 famously wrote, “Much of contemporary developmental psychology is the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time.” It’s hard not to think about that observation in the context of intelligence-testing 4-year-olds. The script is so rigid, the tasks are so narrow and precise. Skylar did extremely well on her evaluation. Yet to me, the loveliest and most intellectually revealing moment was when she blew off all rules and made that whimsical little flag. If it were a real exam, the tester wouldn’t even have written it down. “Well, right,” says Bagnato. “When the examiner can only say certain things to these kids, and the child can only say certain things back, of course it’s too confining. We know that the way kids display their skills best is through creative play and everyday interactions at home and at school.”
As it turns out, intelligence tests miss lots of things, not just creativity. And perhaps that explains why IQs alone are not especially good predictors of excellence. In the twenties, for instance, Lewis Terman, a psychologist and deep believer in intelligence testing—it was he who revised Alfred Binet’s original test and came up with the Stanford-Binet model—started a now-famous longitudinal study of nearly 1,500 California children with extremely high IQs. He grandiosely called it “Genetic Studies of Genius,” and his hope was to show that these children, whom he called “exceptionally superior,” would one day form the backbone of the nation’s intellectual and creative elite, making crucial advances in sciences and public policy and the arts. But as David Shenk, author of the forthcoming The Genius in All of Us, points out, his subjects only grew less and less remarkable as time wore on. None won Nobel Prizes, though two who were specifically rejected for the study—William Shockley and Luis Alvarez—did, both in physics. None became world-renowned musicians, though two other rejects—Isaac Stern and Yehudi Menuhin—did, for their virtuosic violin-playing. In Outliers, Malcolm Gladwell makes a similar point, noting that one’s IQ needn’t be super-high to succeed; it simply needs to be high enough. “Once someone has reached an IQ of somewhere around 120,” he writes, “having additional IQ points doesn’t seem to translate into any measurable real-world advantage.” In Genius Revisited, Rena Subotnik, director of the American Psychological Association’s Center for Gifted Education Policy, undertook a similar study, with colleagues, looking at Hunter elementary-school alumni all grown up. Their mean IQs were 157. “They were lovely people,” she says, “and they were generally happy, productive, and satisfied with their lives. But there really wasn’t any wow factor in terms of stellar achievement.”
So what do psychologists and educators think makes the difference between good and exceptional? Opportunity, connections, mentors. Perseverance and monomaniacal devotion, or what the psychologist Ellen Winner calls “the rage to master.” Creativity, a willingness to fail. Nelson, the head of Calhoun, can go on at urgent, passionate length about this.
“I want a school full of kids who daydream,” he says. “I want kids who are occasionally impulsive. I want kids who are fun to be with. I want kids who don’t want to answer the questions on those tests in the way the adult wants them to be answered, because that kid is already seeing the world differently. In fact,” he adds, after thinking it over for a moment, “I want kids who are cynical enough at age 4 to know that there’s really something wrong with someone asking them these things and think, ‘I’m going to screw with them in the process!’ ”
Granted, Calhoun is an unusual school, a place where kids don’t even get test scores until they’re freshmen. But one needn’t be particularly subversive to appreciate Nelson’s philosophy of educating 4-year-olds, or his frustration with current practice. “You have to play with blocks,” he says. “You have to make up stories. You have to muck around. Arithmetic and decoding language aren’t life—they’re symbolic representations of other things. And education is being diverted into focusing on these symbolic representations of the very experiences kids are being denied.”
Nelson says he’s considering scrapping the WPPSI as an admission requirement for Calhoun’s lower school, possibly starting as early as next year. As it is, he barely takes a kid’s score into account. One of the most compelling reasons to get rid of it, he notes, isn’t because the test is intellectually pointless. It’s because it’s emotionally insidious. “When we resort to any kind of measure of kids that’s supposed to be qualitative at a young age,” he says, “no matter how cheerfully we do it, no matter how many lollipops we hand out to de-stress the process, young children are extraordinarily discerning. They absorb their parents’ anxiety about it, they absorb the kinds of judgments people are making about them. So there’s a process of organizing kids in a hierarchy of worth, and it’s beginning at an age that’s criminal.”
The irony is that doing well on these exams can be just as damaging as doing poorly on them. “Gifted” is an awfully uncomfortable label for some children to wear. It can cripple their thinking, make them terrified of risk. “It’s not entirely inaccurate to observe that more and more high-achieving students go off to university and don’t care about anything,” says Nelson. “They don’t ask questions, they don’t have original ideas. And it’s not because there’s anything wrong with them, but because they were conditioned to believe that learning is about giving back the right answer.” Nelson knows it’s heresy to say this, but he wonders if it’s true. “These tests, at 4, start that long process of conditioning,” he says. “Right then, children start to believe that learning means pleasing the powerful adult in whose presence you are.”
It’s unlikely that most city schools will follow Nelson’s lead and stop testing 4-year-olds. But it is possible that these tests could earn less and less weight in the selection process as they become tainted by excessive prepping and anxiety. That doesn’t mean, however, that the selection process will become more democratic. “I’m afraid schools will be judging the child in ways that aren’t any better,” says Emily Glickman, founder of Abacus Guide Educational Consulting. “There’ll just be more weight on the school report, and what the nursery-school director says about the child verbally. And often kids who come from expensive, high-cachet nursery schools have elaborate evaluations written about them, because the preschool directors themselves have a high stake in the class’s placement success.” And in the case of private schools, she notes, even more emphasis may be given to a family’s socioeconomic status: “The kindergarten-admission process has always been about openly judging a 4-year-old and secretly judging the parents’ wealth, connections, and likeliness to give.”
Giving less weight to these tests doesn’t guarantee that the selection process would become more sensible, either, or more sensitive to finding those children who’d profit from an enriched education. After all, what mechanism should schools use?
This is the hardest question. Most education researchers can tell you just what’s wrong with intelligence-testing 4-year-olds. But few can tell you what should emerge in its stead. “Before we adopted the OLSAT,” says the Department of Education’s Commitante, “we had 32 different school districts using a huge … a tremendous variety of assessments.” Some, she says, relied on expensive IQ tests; others required teacher evaluations. The result was a hodgepodge of arbitrary standards—ones that, the city believed, worked against children who spoke English as a second language (the OLSAT is given in eight languages) or had lower incomes (the city gives the OLSAT for free).
Given his druthers, Meisels, at Erikson Institute, says he’d try to get a more comprehensive picture of the child. “And that can only be found through watching children in classroom situations,” he says. “And looking at the products of their work. And getting to know them. And that can be done through observational assessments.”
I try to interrupt him, but he anticipates my objection. “It’s not very practical, I know,” he says. “It means teaching teachers how to do it. It’d be more expensive. But you could do it. And then you’d get the right kids into these differentiated programs.”
Many researchers agree with him—and will add, as Meisels later does in our conversation, that kids ought never to be evaluated just once. “If one believes that kids do learn and improve,” says McCall, “then a few new kids should be eligible for gifted programs each year.”
If you’re looking for practical answers, though, Plucker, of Indiana, has a modest proposal. He suggests that schools assess children at an age when IQs get more stable. And in fact, that’s just what City and Country, one of Manhattan’s more progressive schools, does. Standardized tests aren’t required of their applicants until they’re 7 or older. “That way, the kids are further along in their schooling,” explains Elise Clark, the school’s admissions director. “They’re used to an academic setting, they can handle a test-taking situation, and overall, we consider the results more reliable.” Even then, she says, her school still doesn’t weight IQ scores very much. “If we did, what we’d have is a group of kids with good test-taking skills and … I don’t know what else.”
But my money’s on the marshmallow test. It’s quite compelling and, apparently, quite famous—Shenk talks about it with great relish in The Genius in All of Us. In the sixties, a Stanford psychologist named Walter Mischel rounded up 653 young children and gave them a choice: They could eat one marshmallow at that very moment, or they could wait for an unspecified period of time and eat two. Most chose two, but in the end, only one third of the sample had the self-discipline to wait the fifteen or so minutes for them. Mischel then had the inspired idea to follow up on his young subjects, checking in with them as they were finishing high school. He discovered that the children who’d waited for that second marshmallow had scored, on average, 210 points higher on the SAT.
Two hundred and ten points. Can Princeton Review boast such a gain? Maybe our schools ought to be screening children for self-discipline and the ability to tolerate delayed gratification, rather than intelligence and academic achievement. It seems as good a predictor of future success as any. And Mischel’s test subjects, too, were just 4 years old.
DNAinfo.com is the BEST source of info on what is happening with NYC G&T. If you would like to read this article at the website, CLICK HERE. To keep up with what is happening in NYC with school news, this is a fantastic source! This article covers parents’ collective anger and frustration over the scoring snafu with the NYC G&T tests.
NEW YORK CITY — Rares Benga’s 4-year-old son, Luca, scored in the 99th percentile on the city’s gifted and talented exam when the city announced the results earlier this month.
That put Luca in an elite group of just 1,363 New York City kids who got the best possible score on this year’s test, giving them first pick of the city’s most sought-after public gifted programs.
But a week later, the Department of Education uncovered a scoring error at testing company Pearson and announced that the number of kids scoring in the 99th percentile had swelled to more than 2,560. That vastly increased Luca’s competition for a school spot.
Now, Benga and other parents are questioning the newly released scores, saying there are so many high-scoring kids that there must be another mistake at Pearson.
“The 99 percentile bracket is absurdly large,” said Benga, an Upper West Side dad who works in marketing analytics at a financial firm. “All I want is fairness.”
He fired off letters Monday to the City Council’s Education Committee and the DOE calling for an independent commission to audit the scores and to release detailed data and information about the scoring methodology.
He said that releasing the data was the only way to ensure “credibility” in this year’s admissions process.
“Ninety-nine is meaningless the way they do it,” said Benga, who hopes his son will win a spot at the ultra-competitive Anderson School, one of five citywide gifted programs. “The entire methodology is highly suspect.”
In the wake of the Pearson errors, many parents are questioning the validity of this year’s record-high number of students qualifying for the city’s gifted and talented programs. Some are calling it Testing GATE — a clever play on the acronym for Gifted and Talented Exam.
The Department of Education changed this year’s G&T test in the hopes of making it more difficult to prepare for after too many kids qualified for the limited number of seats in previous years. Yet, the new, harder test resulted in even more children qualifying — a nearly 33 percent spike — once the DOE announced that scoring errors had been made by Pearson.
Overall, more than 11,700 children were deemed eligible out of 36,012 test takers — or 32.5 percent — versus last year’s 9,644 out of 39,353 — or 24.5 percent.
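The rates quoted above are easy to check, and they also suggest what the “nearly 33 percent spike” refers to. A quick sketch (the raw counts come from the article; the reading of the “spike” as the relative jump in the qualification rate is my inference):

```python
# Quick check of the qualification rates quoted above.
# Raw counts are from the article; everything else is simple division.

def qualification_rate(qualified: int, tested: int) -> float:
    """Share of test takers who qualified, as a percentage (1 decimal)."""
    return round(100 * qualified / tested, 1)

this_year = qualification_rate(11_700, 36_012)  # 32.5
last_year = qualification_rate(9_644, 39_353)   # 24.5

# Relative increase in the qualification rate, i.e. the "spike":
spike = round(100 * (this_year - last_year) / last_year, 1)  # about 32.7

print(this_year, last_year, spike)
```

Both quoted percentages check out, and 24.5 percent rising to 32.5 percent is indeed a relative increase of just under 33 percent.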
The DOE found an additional 2,700 students qualified for district seats and more than 2,000 others were in the 97th percentile and eligible for the five elite citywide schools.
Only six students would have lost their eligibility because of the scoring error, DOE officials said. The department would not change their percentile ranks because of Pearson’s mistakes, so those children were allowed to keep their initial, higher scores, officials said.
Many parents on Internet forums across the city were outraged when they learned of the errors — especially those with kids in the 99th percentile where the competition for limited seats became even more fierce.
A Park Slope lawyer whose daughter got a perfect score is even exploring legal options over the results, she said. She asked to remain anonymous.
Another parent, a mathematician, began analyzing the scores and thought the big increase in the number of qualifying kids raised some red flags.
Alexey Kupstov, a professor at NYU’s Courant Institute of Mathematical Sciences whose 4-year-old daughter Sofia received a perfect score on the gifted test, scratched his head at the low number of students — just six — who would have been ineligible because of the scoring error.
He said it would have made more sense if there were either zero students who became ineligible or thousands, just as there were thousands who became eligible because of the mistake.
Without having access to the data, Kupstov couldn’t know for sure what happened, so he wrote to Schools Chancellor Dennis Walcott asking for it.
“I believe that there is still a mistake in their calculation methodology,” he wrote last week. “Is it possible to initiate a check in the calculations by Pearson?”
Kupstov, a 33-year-old Manhattan Beach resident, told DNAinfo.com New York he has not received any data despite requests.
He’s concerned that even though his daughter Sofia scored 160 out of 160 on the nonverbal part of the test and 150 out of 150 on the verbal part, she will have a slim chance of getting a gifted seat since so many other children also did well.
“With Sofia, I feel she would be bored in a general education class,” he said. “With probability, I feel we won’t have a chance [at a citywide program]. I don’t mind a lottery, but I think the DOE should be consistent [about scoring].”
Pearson officials said they made three separate errors, including the way kids’ ages were used to calculate scores, a mistake in the score-conversion tables and a mistake in the mathematical formula for combining the verbal and nonverbal portions of the test.
Kupstov believes that when Pearson fixed its mistakes in calculating the New York City scores, the company did not fix similar mistakes in the calculation of national averages, which would affect the number of local kids considered high-scoring.
“What I suspect is that they were using exactly the same methodology [nationwide] and had this error in their system forever, but noticed the problem only when New York City parents came forward and challenged the results,” Kupstov said, adding that he could not be sure without seeing the data.
Even before Pearson’s errors were made public, the local group Parents for Fair Education was pushing for the DOE to use composite scores, so someone who got no questions wrong would be ranked above a child who got one question wrong rather than placed in a lottery with others in the 99th percentile.
The group has a petition with more than 400 signatures calling for the change. The DOE had initially said it would use composite scores this year, but then reversed course.
“If the methodology is wrong,” said Benga, the Upper West Side parent, “then they should use the composite scores since it’s more likely those are correct.”
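To make the dispute concrete, here is a minimal sketch of the difference between the two approaches described above. The names and scores are invented for illustration; the real DOE tie-breaking rules and score data are not public.

```python
# Invented example: three applicants with hypothetical percentile ranks and
# raw composite scores. Under the DOE's approach, everyone in the 99th
# percentile goes into one lottery pool; under the parents' proposal, exact
# composite scores break the ties instead.

applicants = [
    {"name": "A", "percentile": 99, "composite": 310},  # no questions wrong
    {"name": "B", "percentile": 99, "composite": 308},  # one question wrong
    {"name": "C", "percentile": 97, "composite": 295},
]

# DOE approach (as described in the article): a single lottery pool of ties.
lottery_pool = [a["name"] for a in applicants if a["percentile"] == 99]

# Proposed approach: rank by exact composite, so A outranks B outright
# instead of sharing a lottery with B.
ranked = [a["name"] for a in
          sorted(applicants, key=lambda a: a["composite"], reverse=True)]

print(lottery_pool, ranked)
```

The point of the proposal is visible in the output: with composite scores, a child with a perfect paper is never at the mercy of a lottery draw against a child who missed a question.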
Michael McCurdy, co-founder of TestingMom.com, a test preparation website, also questioned the results and said parents were fuming.
“Basically one in three qualify,” he said. “How could more kids qualify than last year? Even adults have to do double takes on the questions [because they’re so difficult]. It doesn’t make sense.”
Neither Pearson nor the DOE responded to questions about the results.
Read more: http://www.dnainfo.com/new-york/20130430/new-york-city/parents-demand-answers-after-snafu-threatens-gifted-talented-spots#ixzz2SC3ftHlm
I wanted to share this story from DNAInfo.com. CLICK HERE to read the article on their site and to read other related stories.
NEW YORK — The new gifted and talented test isn’t just tough for 4-year-olds — it’s also stumping their parents.
The Naglieri Nonverbal Ability Test — which preschoolers have to ace to win one of the city’s coveted public gifted and talented kindergarten seats for fall 2013 — quizzes kids on their spatial reasoning skills, asking them to analyze complex geometrical patterns.
Parents across the city are helping their kids prepare — and must decide if they will request a test by early November — but many said they first had to teach the material to themselves. “I don’t know if I would have been able to figure it out on my own,” said Monica, a Lower East Side mother who is using a test prep guide from TestingMom.com. “If you’ve never done it before, you can find it very difficult.”
The NNAT is an abstract test that asks kids to look at a series of complicated shapes and figure out their pattern, so that they can fill in the missing piece. To solve the visual riddle, the young test-takers have to pay attention to the size and color of the shapes, how they are oriented and how they relate to each other. “There are some questions that many adults might not even be able to answer,” said Janet Roberts, director of education and product development at Aristotle Circle, a test prep and admissions company. “It requires a lot of patience and a certain level of endurance as well.”
When parents first see a sample Naglieri test, with its rotating triangles and checkered squares, “They typically panic,” Roberts said. “[There’s] a little bit of hysteria.” Aristotle Circle’s NNAT preparation book sold out four times faster than any of the company’s other books this fall, Roberts said.
The Department of Education decided to start using the NNAT this year to replace the Bracken School Readiness Assessment, which covered basics like shapes, numbers and colors. The goal is to test children’s true intellectual ability, rather than their learned knowledge — and to make the test harder to prepare for after more than 1,600 preschoolers earned the top score on the entrance exam for this fall, the DOE said earlier this year.
The glut of top-ranking preschoolers left those and an additional 1,000 high-scoring children vying for just 300 kindergarten seats this fall. The NNAT will comprise two-thirds of each child’s score, while the Otis-Lennon School Ability Test, which examines students’ logic skills, will make up the rest.
Radmila Gordon, a Coney Island resident, has been researching the NNAT for months so that she can prepare her 4-year-old daughter Alisa for the test. “It’s very difficult,” Gordon said. “I don’t know how 4-year-old kids are going to do it.” Alisa is good at puzzles and breezes through the easier questions in the NNAT practice guides, but as soon as the problems get harder, she loses focus, Gordon said.
Ella Sidorenko is having the same challenge in working with her son Max, who is just 3 years old and will be among the youngest children in his class when he starts kindergarten next year. “The more difficult the questions become, he gets frustrated and starts crying and says, ‘I can’t do it,'” Sidorenko said. “I don’t know if it’s fair for the children.”
Test preparation experts recommend that parents start by teaching basic pattern recognition concepts with hands-on exercises, using puzzles and building blocks. Then kids can gradually move onto more complicated questions in workbooks.
Practice is very important, especially to ensure that kids understand the format of the test and what they are being asked to do, said Karen Quinn, founder of TestingMom.com. “If a child walks in absolutely cold and sees one [of the complicated pattern questions] for the first time, I would say it’s probably too hard for most 4-year-olds,” Quinn said. “Some would [be able to do it], but others would look at it and it would make absolutely no sense.”
Before even explaining the content that will be on the test, parents should make sure their children understand the idea that for every question there is just one right answer, and the kids should try to find that answer, Quinn said.
Bige Doruk, founder of test preparation company Bright Kids NYC, teaches children strategies like breaking down each question into parts and eliminating wrong answers among the multiple-choice options.
“They’re hard because they’re very visually confusing,” Doruk said of the NNAT questions. “There’s a lot going on.”
While Doruk said she has spoken to many parents who are upset about the harder test, she thinks it’s a good way to identify the children who are truly gifted. “We expect a lot from 4-year-olds in New York,” Doruk said. “The idea is that this is not for all kids. Not every child is going to do well.”
Parents who want to apply for a gifted and talented program for the fall of 2013 must submit a Request for Testing form by Nov. 9. The tests will take place in January and early February, and parents will learn their child’s score in April.
Read more: http://www.dnainfo.com/new-york/20121024/new-york-city/new-gifted-talented-test-so-hard-it-even-leaves-parents-stumped#ixzz2AwS2DJKy
As you may have heard, the NYC Department of Education (DOE) is announcing changes to gifted and talented tests this week. The Naglieri Nonverbal Ability Test® (NNAT®2) is expected to replace the Bracken School Readiness Test® as one of two tests given for G&T qualification. The NNAT®2 is expected to count for 2/3 of a child’s overall score, while the Otis-Lennon School Ability Test® (OLSAT) will count for 1/3 of a child’s score.
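To illustrate what the announced weighting means in practice, here is a minimal sketch assuming the two scores are on comparable scales. The DOE’s actual norming and combination formula hasn’t been published, so the function and inputs below are illustrative only:

```python
# Illustrative only: combine two scores with the announced 2/3 : 1/3 weights.
# The real DOE formula operates on age-normed scores, which we don't have.

def combined_score(nnat: float, olsat: float) -> float:
    """Weight the NNAT at 2/3 and the OLSAT at 1/3 of the overall score."""
    return round((2 / 3) * nnat + (1 / 3) * olsat, 1)

print(combined_score(99, 90))  # strong NNAT pulls the combined score up
print(combined_score(90, 99))  # strong OLSAT helps only half as much
```

The asymmetry is the practical takeaway: under these weights, a point on the NNAT moves the combined score twice as far as a point on the OLSAT.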
The NNAT has 4 types of subtests. The child uses visual cues to figure out what is being asked; the question itself employs very little verbal explanation. It will go something like this: “Look at this picture. There is something missing here. [Point to the empty space where there is a question mark.] Which of these answers [point to all the answer choices] goes here? [Point to the question mark.]”
1. Pattern Completion – here, the child must perceive a pattern within a large rectangle from which a piece has been taken out and is missing. It is like a puzzle with a missing piece, as you can see with the blue and yellow example (below and to the left). On the test, the child chooses between 5 possible pieces to complete the pattern. At the younger levels, this is the most common type of question a child sees on the test. Practicing with real puzzles will be helpful for children who will have to answer these types of questions. Answer: D
2. Reasoning By Analogy – here, the child has to use visual-spatial reasoning about the logical relationships between different geometric shapes which change across one or more dimensions (size, color, number, shading, etc.) across rows and down columns. These are most often delivered in 4- or 6-box matrices (our example below and to the left is a 4-box matrix). The youngest children get these types of questions. Doing these types of practice questions will be helpful to children because it will allow them to see the many ways shapes and figures can change in analogous ways (i.e. going from large to small, black to white, right-side up to upside down, facing left to facing right, etc.). Answer: 4th figure above the bubble.
3. Serial Reasoning – here, children must recognize sequences of shapes (circles, squares, triangles, and other more complex figures) that change across rows and columns in a 9-box matrix. Working with patterns will be helpful to children here. I’d suggest working with coins, beads or Fruit Loops and creating patterns that your child can recognize and help extend. Answer: B
4. Spatial Visualization – here, children must determine how two or more designs would look if combined and in some cases rotated. These are the hardest types of questions and are more prevalent in the higher grades. Practice questions will help tremendously here – so will working with Origami in the real world! Two examples of these types of questions follow. The first question asks what figures would look like when combined. Answer: C
The second Spatial Visualization question asks what a figure will look like after the extra piece is folded over. These questions can get very complex in the later grades where there are folds and rotations that occur in the problems. Answer: D
There is one thing that I am a bit unclear about. The NNAT2 is a test that officially starts at age 5 (it’s designed for 5- to 17-year-olds, Kindergarten to 12th grade). We will have 4-year-olds taking the test in NYC. So perhaps they are giving a test designed for 5-year-olds to 4-year-olds. Let’s see if this is clarified in the announcement the DOE makes (hopefully) this week. We shall see.
For over 1,700 practice questions for NNAT®2, visit www.TestingMom.com.
We held an event last night in NYC where I took parents through the different types of questions children might see on the NNAT®2 (or Naglieri) test and the OLSAT (or Otis-Lennon School Ability) Test. The OLSAT has been the test given in NYC for several years, along with the Bracken Test. However, we believe that the NYC DOE is replacing the Bracken with the NNAT2 for gifted and talented qualification to its District and City-wide G&T programs. It has been reported in the NY Times, but the DOE hasn’t officially confirmed it. We expect the announcement to come in October. We do feel this will be the test, but I can’t say 100% for sure until I see it in writing on the DOE’s website.
I put together a handout with several practice questions for the NNAT2 and the OLSAT (the other test given to kids in NYC) so parents could see how challenging these tests can be. To do well on the NNAT®2 and on certain aspects of the OLSAT, a child has to have strong visual-spatial reasoning skills (the ability to think using shapes and figures rather than words). I have always struggled with these abilities, and so, even as an adult, these questions can really confuse me. Although I helped to write our practice questions for this test, I can still get confused when trying to solve them. I even made a mistake on one of the answers I selected for our handout. I wanted to share some of the points of my own confusion so that you will see how easily the same thing can happen to a child who is trying to figure these out.
This is called a “pattern matrix” question on the OLSAT (or “serial reasoning” on the NNAT). You’ll find questions like this on the OLSAT for children as young as kindergarteners. They begin for children in first grade on the NNAT. The child is asked, “What belongs in the empty box?” He or she must find a pattern that is occurring in both the rows and columns and determine what figure goes in the empty box to complete the pattern. I noticed that I had said that the second figure was the answer, but when I reviewed the question before my talk, I couldn’t see why the fourth answer might not be right. Finally, my partner reminded me that the center black line doesn’t change in the pattern so the second figure must be correct. Once you “get” that, it seems obvious. But until you see what the rule is, a person can really feel stumped.
This is a Reasoning By Analogy practice question for the NNAT. You find this type of question on so many tests given to young children. The child must determine the relationship between the first and second boxes so he or she can determine which answer belongs in the bottom empty box. As you can see, I got this one wrong when I put the handout together. Looking at it now, knowing the answer, it seems so obvious that B is correct. The relationship is: however many rectangles are in the left box, the right box has one more circle. Since there are 3 rectangles on the left, there should be 4 circles on the right. I suppose that when I was writing this handout, I lost track of the relationship. I saw the rectangles on the left, realized there needed to be 4 figures on the right, but chose rectangles instead of circles. I knew I was supposed to choose circles, but I just forgot that in the moment when I selected the answer. I wanted to share this to show you how easy it is for even a grown-up to get tripped up on these questions!
For 100 free practice questions, visit www.TestingMom.com.
These days, children are regularly tested to get into private school or gifted and talented programs. If you live in NYC, the ERB or WPPSI®-III test is given for private school admissions. The OLSAT® and NNAT®2 are given for gifted & talented qualification. The Stanford-Binet is given for Hunter College Elementary qualification. But even if you don’t live in NYC, children around the country are being tested for private school admissions and gifted and talented qualification. The CogAT® Form 6 and Form 7 are commonly given, along with the ITBS® (Iowa Test of Basic Skills), the KBIT®-2 and more.
There is so much you can do to prepare your child for these tests at home. If you have some time, I highly recommend that you pick up my book, Testing For Kindergarten. It is full of games and activities that are fun for your child to do and that prepare him for the most common tests young children are given. IQ Fun Park is a wonderful game you can play with your child that will prepare him for testing. It’s actually a test prep kit, but to a child, it’s play. If you would like your child to do practice questions for the most common tests children are given across the country, TestingMom.com offers thousands of practice questions that she can work with either with pencil and paper or even as games.
When you do work with your child to prepare for testing, keep it light and fun. Never talk about it as test prep. Call it special homework, brain teasers, or puzzles. Give your child brightly colored stickers for doing a good job. We find that children generally love doing this special work with their parents – it’s a bonding experience. And it is great for you because you get to see what your child is good at and what they need to work on. Once you see that, you’ll want to work on the things that give your child trouble outside of the test prep situation. So, for example, if you learn that your child doesn’t know his letters or numbers during test prep, you’ll want to play fun games with him to teach him those things.
For 100 free practice questions, visit www.TestingMom.com.
Visit IQ Fun Park!