Learning Seeds, Inc.
- For-profit, including B-Corp or similar models
Playground Whisperers & The Teachable Moment
Learning Seeds (www.learningseeds.com) is a women- and educator-led social benefit company dedicated to improving the life-long educational outlook of children facing behavioral and social challenges in their early years.
Our team works directly with neurodiverse children in their real-life social environments. Colleagues call our team “the playground whisperers” for our ability to turn challenges into opportunities and make dramatic improvements to a child’s interactions with peers. By seizing teachable moments to apprentice early learners toward positive engagement, we can help them expand autonomy and build authentic social connections.
Engagement
We believe that engagement – including exploration, interaction, and pretend play – forms the foundation of early learning. Engagement is critical to the development of executive function, metacognition, group identity, self identity, and social emotional learning. Once a child has a positive experience exploring, interacting, or pretending, they tend to do a lot more of it, and their natural curiosity and love of learning drive progress.
The central problem for learners who struggle to engage is that they tend to need the most practice with interaction but get the least, because their behavior leads to withdrawal, isolation, and exclusion from experiences in their early learning settings. These children fall behind in the development of the “learning to learn” skills that set a child up for success in kindergarten and beyond.
Our mission is to accelerate early learners’ acquisition of executive function, metacognition, group identity, and self identity, by guiding them toward greater engagement with environment, objects, peers, and community.
Our vision is that all children, regardless of learning differences in early childhood, will arrive at kindergarten ready to make the positive social connections that are so critical to early learning: to be part of the group, to explore, interact, and play with peers. We work toward the day when the one-in-six children who qualify for a social/communication diagnosis will have the identity awareness and sense of belonging that their typically-developing peers have as they enter kindergarten.
From Our Playground To Playgrounds Everywhere
Teachers routinely ask our team of experts for techniques to support inclusion. We are eager to scale the influence of our expert practitioners so we can promote best practices without promoting practitioners away from children.
The Milestones of Engagement Screener – for which we seek the assistance of LEAP Fellows – is built upon the framework that our own practitioners use to observe children and track progress.
Trellis is our teacher development software that captures our expert educators’ decisions and provides less-experienced teachers with access to bite-sized, personalized strategies that respond to immediate daily realities.
We believe that assessments must be tied to practical strategies, and that these two must work together as parts of a system that realizes change for both child and adult. Between the Milestones of Engagement and Trellis, we have developed such a system. It is helping us change children’s lives in our in-person practice, and with LEAP’s support we can bring this system to educators everywhere.
- Pilot: An organization testing a product or program with a small number of users.
As VP of Growth, Paul Joseph leads Learning Seeds’ campaign to scale our expertise: to reach children we’ll never meet by guiding teachers we won’t train in person.
In coordination with the other members of the Learning Seeds leadership team, Paul establishes growth goals, metrics, and plans to bring the Learning Seeds lens and methodology beyond our Boston-based in-person practice.
Paul drives the team’s strategies for the following:
Product development, including research
Funding, including grant applications
Marketing
Partnership with research institutions and other EdTech vendors
Paul supervises Learning Seeds Social Impact Fellows working on projects related to the items above. In summer 2023 Learning Seeds will host an MIT fellow working on our product development team to help us develop the AI that will power the next iteration of our Trellis professional development tool. We will also have a marketing intern helping us with content for a social media campaign.
The Learning Seeds team is well-suited to collaborate with LEAP Fellows in the coming year for these reasons:
LEAP Fellows can be assured of a high level of attention and interest from the Learning Seeds team, because the project in question will be an immediate priority for us in the coming academic year. During the 2024 school year – i.e., during the two LEAP Fellow 12-week sprints – Learning Seeds will be deeply involved in developing, testing, and iterating the Milestones of Engagement screener for which we seek LEAP Fellow involvement and input. This timely overlap means that Learning Seeds leadership, education team members, and LEAP Fellows will be focused on the same work, coordinating as a single team. Most importantly, it means that Fellows will have real-life cases from the field to examine and live opportunities to test and survey.
Learning Seeds regularly collaborates with fellows. As noted in a previous response, this summer we will host an MIT PKG Fellow who will support technical development for our teacher training software solution. This will be the second MIT PKG Fellow to work with Learning Seeds on software development.
We’ll also have a fellow working on content for social media marketing in summer 2023. Social impact fellows have commonly gone on to join the Learning Seeds team in more committed roles; two members of the current leadership team started with Learning Seeds as SIFs.
We’re a strong company with a clear purpose, led by a committed team.
Learning Seeds (www.learningseeds.com) has a thriving in-person practice in Boston. Our team of early childhood education specialists works with neurodiverse preschoolers at school and on the playground where they come in contact with their peers, to help them develop positive social habits. We have an extensive waitlist for our social coaching services and staff training; we are on a mission to make our sought-after expertise available to as many teachers and children as possible.
Founder and CLO Erica Key (Chicago Public teacher and trainer, University of Chicago, MassChallenge Finalist, MIT/Forbes/Techstars advisor), together with Education Team Lead Terri Foster (MS, OTR/L, BC and Tufts, Boston and Learning Without Tears trainer), heads a team of specialists with more than 100 combined years of teaching experience. Their decades of direct work with children in urban public and elite private settings, and their extensive experience running staff trainings, inform the development of our Milestones of Engagement framework, which is the focus of the LEAP Fellow collaboration.
Emily Reddy (MS-OT, afterschool program manager) manages Learning Seeds operations and is also on the Education Team. Emily owns the drafting of the Milestones of Engagement screener – and, like the other members of the Education Team, she uses the MoE framework with her own students.
Dr. Jason Key, CTO (University of Chicago, Harvard, Head of Research Computing), is the Learning Seeds head software engineer and DevOps manager. He leads a team of programmers and has built several research-supportive infrastructure systems serving private industry and academic needs simultaneously.
Learning Seeds Milestones of Engagement is an Early Childhood Screener for Engagement with Environment, Objects, and Peers
We need an engagement assessment
As discussed in our mission statement, we believe engagement forms the foundation of early learning, when children are still developing executive function, metacognition, and perspective taking.
But engagement is difficult to measure. There are behavioral assessments and SEL frameworks that identify behaviors that are preferred or not preferred, and assess whether children exhibit them in certain settings.
But assessments such as HighScope and CASEL are not well-suited to early learners who are still developing metacognition. Often the most basic competencies measured in these assessments assume an understanding of consequences, temporal processing, and self-awareness.
Play is losing ground in early childhood education. Data-driven programs continue to give more weight to areas like literacy and math skills, where performance is much easier to measure. This is especially the case in lower-SES communities, making it an equity issue: without data devoted to the delivery of rich play skills, play will continue to be diminished in all but the most elite, private early childhood programs. We hope that a dedication to scaffolded exploration, interaction, and play will also protect these values as a critical part of every 3- to 5-year-old's experience in group learning. We believe that a sound measure of engagement will make a substantial difference.
The Clinic vs. the Playground
Offices of pediatricians and specialists are not ideal places to assess or guide a preschooler’s social development. Adults can coach social skills best in the real world, at school and on the playground, among peers.
The 30- to 60-month Assessment Gap
As a child approaches kindergarten, it becomes increasingly critical to recognize behavior that might limit their ability to engage with peers. But after 30 months, it is much less common for children to undergo standard assessments to identify developmental concerns. Signs can be subtle during this age window – withdrawn behavior does not grab attention, but often becomes the aggressive, obstructive behavior that leads to preK expulsion or other exclusion.
What to Do
Widely-used assessments do not focus on what can be changed in a child’s social abilities, and how exactly to change it. They typically fail to provide teachers and parents with practical, personalized, effective recommendations of what can immediately be done to guide a child demonstrating a specific behavior. Two- or three-page handouts with short lists provide only general advice.
Solitary Social Learning?
Programs that involve a child’s solitary play on a hand-held device can provide data about some learning skills, but cannot reasonably be used to assess social skills, and certainly cannot coach these skills.
The Preschool to Prison Pipeline is the larger social problem we can help solve. The immediate problem we address is providing effective social skill support for early learners who need it the most but get it the least because their behavior leads to isolation and exclusion.

The Milestones of Engagement screener and progress monitoring tool provides insights that guide teachers and caring adults as they apprentice children toward greater engagement with their group learning setting.
Direct Ties to Practice
In building our framework, Learning Seeds has taken a novel, practical approach to categorizing behavior. After four years of sorting and categorizing the day-to-day decisions we’ve made in the instruction of children who are struggling with interaction, we’ve concluded that nearly every instructional decision we’ve made in service of expanding a child’s engagement in their group has fallen into one of eight categories. We’ve named those categories the Milestones of Engagement.
The questions of the screener are oriented toward experiential goals for the child in a group learning setting, with an immediate eye toward what specifically an educator can do to coach the child toward them (that coaching is the role of our Trellis recommendation engine, noted at other points above, and is not the piece for which we seek LEAP support).
Examples of questions:
“Does the child sit in proximity to peers with their body facing materials?” (Environment Category)
“Does the child settle into engagement with an activity, or do they move quickly from one activity to another?” (Solitary and Parallel Activities Category)
“Does the child say no effectively, or refuse by ignoring or getting upset?” (Transactions Category)
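For illustration only, the sketch below shows one hypothetical way that screener items like those above could be represented in software and rolled up into per-category summaries. The 0–2 rating scale, the item identifiers, and the simple averaging rule are assumptions made for this example, not the actual design of the Milestones of Engagement screener; only the three category names are taken from the example questions.

```python
# Hypothetical representation of screener items and observations.
# Rating scale, item IDs, and the averaging roll-up are illustrative assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ScreenerItem:
    item_id: str
    category: str   # one of the eight Milestones of Engagement categories
    prompt: str     # the observation question presented to the educator

@dataclass
class Observation:
    item_id: str
    rating: int     # e.g. 0 = not yet observed, 1 = emerging, 2 = consistent

ITEMS = [
    ScreenerItem("env-01", "Environment",
                 "Does the child sit in proximity to peers with their body facing materials?"),
    ScreenerItem("sp-01", "Solitary and Parallel Activities",
                 "Does the child settle into engagement with an activity?"),
    ScreenerItem("tr-01", "Transactions",
                 "Does the child say no effectively?"),
]

def category_summaries(observations: list[Observation]) -> dict[str, float]:
    """Average the ratings within each Milestones category (hypothetical roll-up)."""
    items_by_id = {item.item_id: item for item in ITEMS}
    totals, counts = defaultdict(float), defaultdict(int)
    for obs in observations:
        category = items_by_id[obs.item_id].category
        totals[category] += obs.rating
        counts[category] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

# Example: one educator's ratings from a single observation session.
print(category_summaries([
    Observation("env-01", 2),
    Observation("sp-01", 1),
    Observation("tr-01", 0),
]))
```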
Moving A Child Toward Metacognition
In a chapter dedicated to the Learning Seeds Milestones in her upcoming publication Equity and Diversity of Learners, Dr. Karen Stagnitti, Emeritus Professor in the School of Health and Social Development at Deakin University, describes our framework as
“especially beneficial for students who do not yet have mastery of the complex skills required for reflecting on one’s own past behavior to plan for improvements in the future. This framework can help educators identify where a child is naturally engaged and what experiences can help a child apprentice in a more inclusive and prosocial experience while the child is given scaffolded experiences in the engagement experiences that lead up to the development of mature SEL skills including: metacognition, temporal processing, perspective taking, and executive functioning.”
The eight Milestones categories progress from basic physical experiences to complex social experiences, and finally to experiences of metacognitive conceptualization. At this level the child can conceive of themselves as a member of the group; reflect on the past in order to plan for future interactions; consider cause and effect; and apply the Golden Rule.
Learner Variability
Our framework, our approach, and our tools align well with the learner variability research agenda of the Jacobs Foundation as described in their overview.
Just a few examples of shared areas of interest:
“What skills prepare students to learn in future contexts, how do they interact with each other and the context, and how do we teach and measure those skills?”
“How can learning experiences adapt to individual children’s academic and nonacademic states and foster their development?”
“How can new types of data provide insights into multidimensional learning, development processes, and outcomes?”
- Pre-primary age children (ages 2-5)
- Persons with Disabilities
- Level 2: You capture data that shows positive change, but you cannot confirm you caused this.
Foundational research:
Among the key academic influences on our thought and approach are:
Lev Vygotsky’s Sociocultural Theory of Cognitive Development
B.F. Skinner’s Behavioral Psychology
Albert Bandura’s Social Learning Theory
David Kolb’s Experiential Learning
We have examined, compared, and in several cases correlated our Milestones of Engagement framework with other frameworks of cognitive and behavioral development such as:
CASEL
HighScope COR Advantage
Head Start Preschooler Approaches to Learning Framework
CLASS
Habits of Mind
Sesame Workshop Global Learning Framework
Tools of the Mind Play and Executive Function (Bodrova & Leong)
Signature Strengths (Martin Seligman)
Seven Essential Life Skills (Ellen Galinsky)
Social-Cognitive Categories for Play Observation (Christie and Johnson)
Complexity of Children’s Play (Sara Smilansky)
8 Stages of Psychosocial Development (Erik Erikson)
We have examined and compared our Milestones screener to a number of the norm-referenced, criterion-referenced, psychometrically-sound screeners and assessment instruments most commonly used to assess behavioral and cognitive development in preschoolers, including:
BASC-3
VB-MAPP
WPPSI-IV
Brigance IED-II
VABS-II
CIBS-R
NICHQ Vanderbilt Assessment Scale
ADOS-2
Formative Research:
The Learning Seeds education team has been using the Milestones of Engagement framework – or predecessor iterations – for the past several years to assess the children they work with and to develop instructional plans. It is the lens through which they conduct successful transformative work with the neurodiverse preschoolers in their care, and the structure of the winning methodology that keeps their services in high demand locally. The shorter-form screener for which we are seeking LEAP support is a relatively new manifestation of the design; it remains under development.
The Learning Seeds education team recently got an indication that they are succeeding in making the framework accessible to users outside of the team that built it: a new hire was able to interpret and apply the framework with a client child, in line with the team’s expectations.
It should be noted here that positive attention from qualified outside sources has also provided informal validation for our solution: in a response above, we quoted Dr. Karen Stagnitti, Emeritus Professor in the School of Health and Social Development at Deakin University, who included an entire chapter about the Learning Seeds Milestones of Engagement in her upcoming publication Equity and Diversity of Learners. But it is of course understood that this does not constitute empirical evidence.
Foundational research into other frameworks, assessments, and screeners, as described above, has led us to the following conclusions:
We are able to at least partially align and correlate our framework with many commonly used cognitive and behavioral frameworks, which would indicate that we could find adopters of our model among those already using similar approaches.
We find that some of the most commonly embraced assessments – including CASEL and Habits of Mind – have relatively little overlap with ours because they largely assume metacognition: they are geared toward children with some degree of self-regulation and executive function. That is, they are best suited to older children, or children who are not experiencing developmental delays.
This point was touched upon in our response to the “problem we seek to solve” question above. We believe it is a distinguishing feature of our framework and our screener that it thoroughly accounts for children with limited connection to their immediate environment, and is built to guide them toward the levels of self-awareness that are assumed in some other assessments.
Our screener has a long way to go before it will have the statistical foundation that the commonly used screeners listed above have. We need help addressing questions of norm-referencing, criterion-referencing, and psychometric soundness. We need guidance in improving our instrument so that it will earn the confidence of users, administrators, and researchers.
Formative research, consisting of our own regular use of the framework to screen, observe, and track the progress of children in our care, has confirmed to our team its overall effectiveness. Changes and improvements have been and continue to be made, but the results that we consistently get – and the responses of grateful families – are proof to us that we have something that others can greatly benefit from.
Our own four years of successfully using the Milestones of Engagement (or predecessor iterations) as a screener and observation instrument constitute the only evidence we have for their validity. While this is meaningful evidence to us – and likely to many of our clients – we fully realize that:
We do not have concrete evidence to demonstrate that it is the Milestones of Engagement that have led to our successes, as opposed to other elements or characteristics of our instruction
We do not have proof of the validity of individual pieces of the framework; we cannot be sure of the causal relationships between any specific piece of the framework and any particular developmental success a child experiences
Our screener is not norm-referenced, criterion-referenced, psychometrically sound, or otherwise up to standards of validity required in the field
We have not done clinical studies to demonstrate validity
We do not have a statistically significant volume of users
A proper evidence base is important to us because we want:
empirical confirmation of the validity of our solution
to know where the validity of our solution is lacking, so we can make necessary improvements
to be able to satisfy the requirements of potential clients, buyers, or funders that we may approach to help us scale our impact
Now is a great time for LEAP Fellows to work with Learning Seeds because, as noted in the question above about our team:
During the two LEAP Fellow 12-week sprints, Learning Seeds will be deeply involved in developing, testing, and iterating the Milestones of Engagement screener for which we seek LEAP Fellow involvement. This timely overlap means that Learning Seeds leadership, education team members, and LEAP Fellows will be focused on the same work, coordinating as a single team. Most importantly, it means that Fellows will have real-life cases from the field to examine and live opportunities to test and survey.
What exactly are the standards that screeners and assessments such as the BASC-3, VB-MAPP, WPPSI-IV, Brigance IED-II, and VABS-II have met to gain acceptance, and what steps would we have to take to meet these standards?
What are some identifiable indicators by which we can gauge our success as we build this innovative measure of student engagement – qualitative markers that can show us that we are on track; red flags; or potential blind spots?
What are some empirical tests we can conduct that will be of a statistically-significant but reasonable scale for our current stage of product development?
- Foundational research (literature reviews, desktop research)
- Formative research (e.g. usability studies; feasibility studies; case studies; user interviews; implementation studies; pre-post or multi-measure research; correlational studies)
- Summative research (e.g. correlational studies; quasi-experimental studies; randomized control studies)
Desired outputs:
A thorough review conducted by the qualified experts of the LEAP Fellow team to match our own review of frameworks and assessments, as described in our response to the “what research/studies has your organization conducted” question above:
Identify the leading assessments and frameworks used in the early childhood education space that we should be comparing our solution to
Compare and correlate our solution with these assessments and frameworks, or highlight the critical differences
Recommend how we can meet the validity standards these others have met
Observation of the education team and/or pilot users employing the screener in live cases, and write-up of recommendations on how to improve
Proof of concept draft to gain confidence of clients, buyers, researchers, and funders, particularly SBIR grants
Draft design of an empirical study to demonstrate validity of the screener as a tool to measure a child’s engagement in a group learning setting
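To make concrete the kind of analysis such a draft study design might include, the sketch below illustrates two common psychometric checks: inter-rater reliability of screener ratings and concurrent validity against an established instrument. All numbers are invented for illustration, and the choice of statistics (Cohen's kappa and a Pearson correlation) is an assumption on our part rather than a finalized design; selecting the right analyses is precisely what we hope the LEAP Fellows will help us do.

```python
# Hypothetical sketch of two analyses a draft validity study might include:
# (1) inter-rater reliability of screener ratings, and
# (2) concurrent validity against an established reference measure.
# All data are made up for illustration; the statistics chosen are assumptions.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# (1) Two educators rate the same children on the same screener item (0/1/2 scale).
rater_a = [2, 1, 0, 2, 1, 1, 0, 2]
rater_b = [2, 1, 1, 2, 1, 0, 0, 2]
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Inter-rater agreement (Cohen's kappa): {kappa:.2f}")

# (2) Per-child Milestones engagement scores vs. scores from an established
# reference instrument administered to the same pilot sample.
milestones_scores = [1.2, 0.8, 2.0, 1.5, 0.4, 1.8, 1.1, 0.9]
reference_scores  = [55,  48,  70,  62,  40,  66,  52,  50]
r, p_value = pearsonr(milestones_scores, reference_scores)
print(f"Concurrent validity: r = {r:.2f}, p = {p_value:.3f}")
```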
To the extent possible, we would immediately take the recommended steps to validate our solution:
Embark on the steps identified to elevate our solution to the same level of standards as other instruments examined by the LEAP Fellow team
Implement the design of the empirical study to demonstrate validity
We expect the LEAP team’s analysis of other screeners, assessments, and frameworks would identify for us not only some chief competitors, but some possible collaborators with whom we share approach and perspective, and each have something to offer one another.
This same analysis would also point us in the direction of possible adopters of our solution: if someone is using another screener or framework that is similar to ours now, they might be receptive to the improvements that ours offers.
We expect the observation write-up will suggest modifications and improvements – to the screener as an instrument; to how the screener is administered; or perhaps to the eight-category framework upon which it is built. We will immediately consider the recommended changes and act on those that make sense to us.
The proof of concept will be put to use in grant applications – particularly an SBIR grant application, among others.
Material help from a team of MIT Fellows to strengthen the evidence base of our Milestones of Engagement screener will provide an immediate boost of confidence: for our own team, for our current clients, for prospective new clients, and for funders.
If significant flaws are discovered by the LEAP team during the project sprint, we would certainly spend time in the short-term addressing those flaws. As noted above, we will move as immediately as we can to follow the team’s recommendations to strengthen our evidence base.
Ultimately, a stronger evidence base will facilitate the scaling of our expertise:
we will be in a much better position to approach buyers, commercial partners, and research institution partners when they can see solid evidence supporting our solution, or they can see that we have a concrete plan to establish that evidence, created with the help of MIT Fellows.
Whether SBIR or other funding, a well-crafted proof of concept is certainly likely to attract more financial support.
In these ways the MIT LEAP program will help us extend our reach and widen our impact. It will lend material support to our mission of bringing the transformative work of our Boston-based team of educators to children and teachers everywhere.