November 18. Another new conference was on the schedule for me: Koli Calling. Unfortunately, this conference had to be held online. From Fenia, I had heard about this relatively small, single-track conference. It would be a good target for networking. We'll have to see whether this still holds during the three online conference days.
So, Koli Calling is single track. As we're only one timezone away from Finland, where Koli Calling is normally held, I was in luck regarding the timeslots. But, as the conference has attendees from all over the globe, the organizers also took US and AUS timezones into account. As a result, each submission was presented twice. This means that the schedule I present below is the order I chose, and does not necessarily match what other attendees saw.
I started my morning with a poster session. The most interesting poster/abstract in this session to me was a work on teaching debugging by Olli Kiljunen. In his research, he will work on adding debugging instructions to IDEs, to teach students to fix their code errors. As error messages are not always descriptive, he proposed to take the student by the hand in a step-by-step process. We discussed some of the things he might take into account when designing the front-end.
The first paper talk I attended was in a session on Computational Thinking. The paper was about identifying Three + 1 perspectives on Computational Thinking. The authors had first conducted a literature review to find papers that mention CT, and to explore their definitions. Then, they discussed their findings with 8 senior researchers in the area. The first outcome was a set of five aspects of CT:
Algorithm, Abstraction, Modeling, Simulation and Implementation.
With these five aspects, they determined there were three perspectives from which CT was approached.
Then, the paper watching was interrupted by teaching duties.
A couple of hours later, the program was restarted with session C, Help Seeking and Situated Learning.
The first paper in this session was called Reading between the Lines: Student Help-Seeking for Unspecified Behaviors by Jack Wrenn and Shriram Krishnamurthi. In their programming course, they applied an automated TA (Examplar) that helped with evaluating test cases. They were wondering what types of questions students had when they also had this feedback.
They found that students still had input-output questions, even though Examplar should capture answers to such questions. They found that these questions were about cases with unspecified inputs and underspecified outputs, which unearthed misconceptions about property-based testing.
In the question session, Shriram mentioned that this might be a transfer problem: when the students are nudged in the correct direction, they do apply property-based testing. On Discord, Otto posed the hypothesis that teaching property-based testing before unit testing might solve part of the problem. I personally don't know anything about property-based testing, but I found the method and discussion very interesting from an educational perspective.
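The paper itself doesn't spell out what a property-based test looks like, so here is a minimal hand-rolled sketch (not Examplar's actual mechanics, and using plain stdlib rather than a dedicated library): where a unit test pins one input to one output, a property-based test states what must hold for *any* input and checks it against many generated cases. The function `my_sort` is a made-up example.

```python
import random

def my_sort(xs):
    """The function under test (here just a wrapper around sorted)."""
    return sorted(xs)

# A unit test pins one concrete input to one concrete output:
assert my_sort([3, 1, 2]) == [1, 2, 3]

def check_sort_properties(trials=200, seed=0):
    """A property-based test: generate many random inputs and check
    that the required properties hold for every one of them."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        out = my_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(xs) == sorted(out)
    return True

check_sort_properties()
```

The contrast may explain the students' lingering input-output questions: a property constrains behavior without ever naming a specific output, which is exactly the kind of underspecification the paper found students struggling with.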
The second paper was on Open Source Software (OSS) practices in CS2. How can you apply OSS in CS2 to maximize benefits and minimize cost? The authors came up with four frameworks, derived from upper-level courses where OSS is already integrated.
Then, there was another break with two posters. The first was on Parsons problems for Regex, which sounded interesting but not really relevant for me. The second was on an autograder for SQL. As I had to step away from the computer, I posted a question in the poster's channel for asynchronous communication. This led to a nice chat during the next paper session.
The last paper session of the day was on student perspectives, and included three papers: student perspectives on event listeners, educator perspectives on the Main method, and student thoughts on buttons. The first and third papers had similar setups, but I'll discuss them in the order of the session nonetheless. The papers were of interest to me as they discussed conceptions on different topics, which gave me ideas for future research.
Paper 1: Student perspectives on event listeners. The research question was: How do students understand Event Driven Programming concepts? Students took a questionnaire, where they could describe their interpretations in text or submit concept maps. The authors found confusion about event handlers and listeners regarding the relationships between the event and the subprogram. Regarding runtime behavior, most students had misconceptions about where the event listener 'lives'; these were the most apparent confusions.
Paper 2: An educator's perspective on the main method. From an educational standpoint, can you learn to write code without learning to read code? Students would like that, but they probably can't. As with almost all professions, you need an apprenticeship before you go out alone. So in the classroom, do we expect students to learn to read from the slides? Simon showed us both a novice's and an expert's gaze over the same piece of code. The novice read top to bottom; the expert went straight to main and read it carefully. The latter is most likely the more efficient reading behavior.
The authors' recommendation for programming teachers is not to rely on students to pick up what they need to learn, but to give explicit guidance. Teach them that program code is not (necessarily) linear. Above all, following the order of execution is more useful for understanding the code than following the order in which the text appears.
This made me reflect on our expectations in the introductory database course. Today, we taught proofs using Armstrong's rules, and a student asked me whether there was a guide they could follow. And no, there is not. They should already know how to write proofs; this is a prerequisite for the course. But perhaps they should read more before they write...
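The proofs themselves are pen-and-paper work, but one part of reasoning with Armstrong's rules can be mechanized: to check whether a dependency X→Y follows from a set of functional dependencies, compute the attribute closure X+ and see whether Y is contained in it. A minimal sketch, with a made-up schema and dependencies (not the course's actual exercises):

```python
def closure(attrs, fds):
    """Compute the closure of a set of attributes under a list of
    functional dependencies, each given as a (lhs, rhs) pair of
    frozensets. X -> Y follows from the FDs iff Y is a subset of
    closure(X, fds)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If we already have the whole left-hand side, the
            # right-hand side is derivable too.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Hypothetical schema R(A, B, C, D) with FDs A -> B and B -> C:
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C"))]

# By transitivity, A -> C should follow: C is in the closure of {A}.
print(sorted(closure({"A"}, fds)))  # ['A', 'B', 'C']
```

Each step of the loop is an application of augmentation plus transitivity, so a membership check against the closure corresponds to a derivation the student could also write out by hand.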
Paper 3: Students' thoughts about buttons. This study had a similar setup to paper 1, with a course survey asking about the students' perspectives on an aspect of teaching. The paper had four research questions. First, they asked the students what a button was. The authors received descriptions of three categories: vague (it, thing), interaction (functionality, user interface), appearance (area, visual).
The second question was what happens when pressing start on a touch-screen in a mall. Now, this is impossible to know, but most likely, something will begin or start. The authors received replies of varying degrees of concreteness, up to participants suggesting what the screen might be for. One misconception they found was that the computer wakes up.
The third question was about an ambiguous case: what happens when you press a save button in an online form? Most likely, the form is either saved or submitted. Again, the participants speculated about what might follow from pressing the button, including the user having an account, or the sending of a confirmation email.
Finally, the authors had a question that included a code snippet, and asked what would happen when this button was clicked. Interestingly, some students interpreted the next actions purely from the text on the button, instead of from the code. In the descriptions, the code was hardly mentioned.
The authors concluded with two important findings: (1) the button is an actor (it can be broken, it fulfills a functionality), and (2) interpretation is affected by context, such as our own experiences, the button's location, and its labels.
The day ended (late) with a Keynote by Sue Sentance. The title of her talk was: Teaching computing in school: Is research reaching classroom practice? She started with a 30-second version of the talk, which was very helpful.
She started by asking: What is the goal of CSEd research?
CSEd research is a broad field, in which Sue sees four areas: pedagogy and assessment, curriculum and theory, society and ethics, tools and resources.
It is a young field, but we are receiving money for research. Then, is this research reaching the classroom? It seems that in general, the answer is no. First of all, the priorities of researchers and the needs of teachers are not aligned. And, even if the research topics are aligned, the type of knowledge is not: propositional versus procedural, and generalized versus context-specific. Finally, even if teachers can access research, they still need to interpret it and embed it in practice.
To change this, we need to focus on the next question: What is the value of research in education? Sue approached this from four perspectives: researcher, teacher, school, and policy maker.
If there is enough time, and researchers and teachers are in touch, there are various types of activities that can be undertaken. As researchers, we can work together with teachers to bring research to the classroom, or we can use school data for research. However, the most accessible and common activity is to produce your research content in various formats, including the development of lessons.
Ways in which research can be adapted include transfer, translation and transformation. Transfer means to adopt materials directly. Translation means to take the materials and apply them to a different context. To transform materials, you take the basic concepts and make them your own.
A good reflective question by Sue was: have you ever adapted another researcher's work in your teaching? I personally have not, which may be explained by my limited teaching experience (and the fact that I'm never the responsible lecturer). But I also don't know of any materials that I might like to transfer or translate into our education. This was a good reminder to keep my eyes open.
After the keynote, there were some good discussions about funding, career focus, and the accessibility of higher education versus K-12. The fact that I do research on the topic that I also teach is convenient: I can test my hypotheses with my own students. But we should be careful that we disseminate our work beyond our own student population, and to not work in a bubble. I hope the work I present on Friday will help us do exactly that.