How Does This Work, and Why Does It Work This Way?
Testing and evaluating software and services remains an underappreciated and misunderstood skill. This session will address this reality on multiple fronts, including:
- How can we identify the questions we should ask about a service?
- How can we assess the claims and promises made in both marketing materials and legally binding policy documents?
- What are easy and affordable ways to do meaningful privacy and security testing?
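As one illustration of the kind of easy, affordable testing the last question points at, a short script can flag which common HTTP security headers a service's responses are missing. This is a hedged sketch, not a method from the session: the header list and the example response below are illustrative assumptions, not findings about any specific product.

```python
# Check which commonly recommended HTTP security headers are absent from a
# response. The list below is an illustrative selection, not an exhaustive
# or authoritative standard.
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",  # enforce HTTPS on return visits
    "Content-Security-Policy",    # restrict where scripts/resources load from
    "X-Content-Type-Options",     # prevent MIME-type sniffing
    "Referrer-Policy",            # limit referrer information leakage
]

def missing_security_headers(headers: dict) -> list:
    """Return recommended headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# Hypothetical response headers for demonstration:
example = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
print(missing_security_headers(example))
# → ['Content-Security-Policy', 'X-Content-Type-Options', 'Referrer-Policy']
```

In practice, the headers would come from a real request (e.g., via `urllib.request` or a browser's developer tools); the point is that a meaningful first-pass check requires nothing beyond free tooling.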
From the audience — and before the session occurs — we will collect questions and information from people in schools about what they want to know about the technology they use, and what barriers exist to getting good answers.
In many ways, this is an old conversation. And, given the continuous increase in data incidents — to say nothing of reckless uses of what passes for "AI" in education — it's a conversation that still needs to occur.
Conversational Practice
Prior to the session, I will create a list of openly licensed and freely available resources that can be used to evaluate software and services, including AI tools.
During the session, this resource will be augmented and improved. The updated version will be re-shared, and if there is interest, I will recruit and support multiple co-authors to maintain the resource over time.