Koli Calling 2021

Day 2 - Friday November 19

Posted by Daphne Miedema on November 19, 2021 · 13 mins read
tripreport virtualevent contribution

Morning

November 19, and I was all ready for day two. The first session I attended was on Block-based programming and accessibility. It contained two papers: the first on a Scratch challenge and the second about instruction in native languages (non-English).

The first paper was A Scratch Challenge: Middle School Students Working with Variables, Lists and Procedures. This paper considered the requirements for middle-school CS education in German-speaking Switzerland. The curriculum specifies various competencies, one of which is that middle-school students should be able to develop executable and correct computer programs that use variables and subprograms.
The authors set up an online challenge in Scratch, in which the students (who had no programming experience) were divided into groups and then asked to make any program they liked. The challenge allowed the teachers and researchers to gauge the students' level of knowledge.
To make the competencies measurable, the authors focused on three concepts: variables, lists and procedures. They found that the most commonly (and correctly) used concept was the variable. The students gave their variables meaningful names, which indicated understanding. The applications were mostly basic, as expected.
Lists and procedures were typically used correctly too, although their application in the projects stayed, on average, close to what the example programs demonstrated. It seems that these concepts were more difficult for the students to grasp, as there were fewer original uses. However, the fact that the students did use the concepts was, according to the authors, an indication of basic understanding.
I felt that this talk was highly related to Sue's talk yesterday, where she asked us to what extent research makes it to the classroom. In this case, there is direct collaboration, with a high level of transfer (most likely) occurring.

The second paper was English versus Native Language for Higher Education in Computer Science: A Pilot Study. The research was about teaching CS in Pakistan. The official language of higher education there is English, but in practice, students start learning English in different grades depending on which school they attend. Their first language is typically Urdu.
The researchers examined their class's confidence in speaking, listening, reading and writing, in Urdu versus English. They found that the students were more comfortable with Urdu on the oral side of the spectrum, and more confident in English on the writing side. The students were also more comfortable asking the teacher questions in Urdu than in English. So, the authors suggest keeping the teaching materials in English, but allowing classroom conversations in Urdu.
An interesting question from the audience was whether Urdu has words for concepts in programming such as a file or a loop. It does not, so these concepts all have English names.
We have this discussion in the Netherlands too: should we be teaching everything in English? One advantage is the influx of international students (although the university is at maximum capacity). Another advantage is that it is easy to move between countries: you don't have to reconsider everything you have learnt. However, this does require that a student is confident in English.

Next up was another poster session, this one including my work. First, our video was played, and then there was room for discussion. In this first session, Fenia and I were visited by Marco Hartmann, who is working on a similar study on measuring misconceptions. We had a very nice discussion of things to keep in mind, both for designing the study and for running it.

Afternoon

In the afternoon, I attended session G: Block-based programming and accessibility.

The first paper in the session was Promoting Students’ Progress-Monitoring Behavior during Block-Based Programming. The paper starts from the observation that successful students apply self-regulated learning (SRL) techniques such as planning, progress monitoring, self-explanation and reflection. So, if we support other students in applying such techniques, they might become more successful students.
According to the authors, this has not been well investigated for programming. So, they designed three types of interventions that would allow the students to monitor their progress, evaluated against a control group (a rough sketch in code follows below):

  1. A checklist of subtasks/objectives that split up the programming exercise. Students can check off items manually.
  2. This same list, but including feedback on the items, such as completed, correct, incorrect, completed but not checked off yet, and more.
  3. The same list as in 1, but with a progress bar and completion percentage per task.
Now, how did the students use this? The students used the options and liked them, and they were also faster in completing the task than those in the control group. But students did not blindly follow the feedback; they still thought for themselves. For example, if a student considered the subtask done, but the progress bar said it was not, they would still continue to the next task.
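To make the three variants concrete, here is a minimal sketch of how such a checklist could be modeled. This is my own illustration, not the authors' implementation: the class names, fields and feedback labels are assumptions based on the description above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Feedback(Enum):
    PENDING = "pending"
    CORRECT = "correct"
    INCORRECT = "incorrect"
    COMPLETED_NOT_CHECKED = "completed but not checked off"

@dataclass
class Subtask:
    description: str
    checked_by_student: bool = False      # intervention 1: manual check-off
    passes_checks: Optional[bool] = None  # result of a hypothetical automated check

    def feedback(self) -> Feedback:
        # Intervention 2: combine the student's own check with automated feedback.
        if self.passes_checks is None:
            return Feedback.PENDING
        if self.passes_checks and not self.checked_by_student:
            return Feedback.COMPLETED_NOT_CHECKED
        return Feedback.CORRECT if self.passes_checks else Feedback.INCORRECT

@dataclass
class Exercise:
    subtasks: list = field(default_factory=list)

    def progress_percentage(self) -> float:
        # Intervention 3: progress shown as a bar with a percentage.
        if not self.subtasks:
            return 0.0
        done = sum(1 for t in self.subtasks if t.passes_checks)
        return 100.0 * done / len(self.subtasks)
```

The interesting part is the mismatch case: the system may know a subtask is done while the student has not checked it off (or vice versa), which is exactly where the observed behavior of students overriding the feedback shows up.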
I'm wondering whether the fact that the students were provided with these subgoals also, to some extent, reduces the SRL, because they don't have to come up with their own. The author said that in future work, they were going to experiment with students having to first create their own subgoals before moving on to the programming. On the other hand, the subgoals make the students more autonomous, and thus might empower them. That is also a good goal to have.

The second paper was Diversifying Accessibility Education: Presenting and Evaluating an Interdisciplinary Accessibility Training Program. The authors discussed READi, an accessibility program for graduate students in which the students work together with an institution on a project. It is a training program including courses, a project, a retreat, a symposium and a workshop. The authors presented the following learning outcomes: students became more aware of accessibility barriers, understood the importance of involving end-users, and became aware that accessibility challenges require an interdisciplinary effort.
This program looks very nice, and I would have loved to have something like this in my own education! I feel like this is something that is completely neglected at Eindhoven. I only became more aware of accessibility issues in the conferences I attended over the past year, where chairs paid explicit attention to it.

Then, there was another poster session. The video of Using data cards for teaching data based decision trees in middle school showed an interesting presentation with really pretty data cards. I went into the poster room to hear more about this project and others from ProDaBi.

Evening

The evening started with another poster session, containing my own poster again. In this session we had two visitors. First, Andrew had a suggestion for a colleague who might want to join us. Then, Jan-Mikael came by to learn more about misconceptions.

The following paper session was on meta-research, containing three papers.

The first paper was An Analysis of the Formal Properties of Bloom's Taxonomy and Its Implications for Computing Education. The author argued that Bloom's taxonomy does not really fit Computer Science. The nomenclature can be confusing and imprecise, and lists of verbs do not help, as the same verbs occur in multiple categories. According to the author, the taxonomy is more limiting than helpful, which means it might be time to leave it behind.
In our university's program for the University Teaching Qualification, Bloom's taxonomy is a central part of the course material. We learn that to test our students appropriately, we should consider the students' level and choose an appropriate level of testing from Bloom. Just throwing the taxonomy out does not seem like a great solution, so for now, perhaps we should bridge the problems identified by the author with our intuition.

The second paper was Wrong Answers for Wrong Reasons: The Risks of Ad Hoc Instruments. The authors first did a literature review to see how researchers at ICER, ITiCSE and Koli Calling evaluate teaching interventions. Most authors use ad-hoc instruments, which in practice are multiple-choice questionnaires. Now, the problem with MCQs is that you do not know why a student picks an answer: it might be that they guessed (in)correctly, or did not really answer at all.
A study becomes stronger if, in addition to MCQs, we add the option for students to elaborate on their answers. We can then code the answers with more specific descriptions, such as correct, imprecise, missing, etcetera.
The authors illustrated this with a study with a pre- and post-test, where they found no difference between the two groups when only considering the MCQs. However, when they incorporated the second tier of questions, they did find a difference between the two groups. Their take-away is to question the reliability and validity of your instrument and, if at all possible, to use a standardized instrument.
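To illustrate why that second tier matters, here is a minimal sketch of scoring a two-tier item, with hypothetical data and coding labels: a correct MCQ answer only counts as understanding when the accompanying explanation was also coded as correct.

```python
from typing import NamedTuple

class Response(NamedTuple):
    mcq_correct: bool  # tier 1: was the multiple-choice answer correct?
    explanation: str   # tier 2: coded explanation ("correct", "imprecise", "missing", ...)

def shows_understanding(r: Response) -> bool:
    # A lucky guess with a missing or wrong explanation is filtered out.
    return r.mcq_correct and r.explanation == "correct"

responses = [
    Response(True, "correct"),     # genuine understanding
    Response(True, "missing"),     # possibly a lucky guess
    Response(False, "imprecise"),  # wrong answer, partially right reasoning
]

print(sum(shows_understanding(r) for r in responses))  # 1, not 2
```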
In the discussion, Claus mentioned an interesting paper about MCQs. The point was that if you use explanations, you can get away with far fewer MCQs; to eliminate noise, you can alternatively just add more questions.
With our proposal of a three-tier MCQ instrument, with doubled questions for validity, I'm now reflecting on the questionnaire design. Would it be better to just check each misconception with four questions instead of two, and then leave out the other two tiers? I do not think so. For misconceptions, it is essential to capture the thought process.
Perhaps such advice holds more for cases in which you try to measure differences between groups.

The third paper was The Importance of Context: Assessing the Challenges of K-12 Computing Education Through the Lens of Biggs 3P Model. In interviews with teachers, the author identified various challenges they encounter: the evolution of technology, COVID-19, lack of funding, lack of experienced staff, problems of secondary education, lack of time, lack of resources, and student issues.
The author then used these findings to translate the Biggs 3P (presage, process, product) model to Computer Science by including fitting examples. He also suggests adding a fourth P, policy, to the model to broaden the context.
This talk was again close to Sue's, in that it acknowledged that practice is to a large extent guided by policy makers, who decide what is allowed and where the money goes.

For me, the evening was closed off with a Virtual Sauna. Everyone in the Zoom put on a bathrobe or a towel and changed their background to a picture of a sauna. We started off with Lauri playing two piano pieces by Ravel, which was very calming. Then, we had some jokes about Computer Science. Simon shared some stories about previous Kolis and other adventures. And we ended with a session of the game Just 1. This is a guessing game in which one person leaves the room, and the others are given a word that they each need to describe with just one other word. If two people have chosen the same word, that word is out. Then the guesser comes back, sees all the remaining words, and has to guess which word the others described. It was really fun, and I think we were a very creative group, as not many of us chose the same words!

At 22:30, it was time to log off and get some sleep in preparation for day 3.