P1: 
I can start. I think we started off with talking about the information that is collected, and decided to focus on transparency there. 
So the aim is to create a full picture: when and where did this report come from, what kind of metrics are being used, why are you being investigated, why is it ranked that way, and also to stimulate a dialogue about this. 
So we also want to give the decision subject, if I'm saying that right, the opportunity to give feedback on it. 
And that can be when the decision is right, but also when a decision is wrong; the idea is to always go for this dialogue to improve the AI system. 
We also talked about getting feedback in two different ways: the human way, asking whether it is fair that we collect this data, and whether we are using the right metrics. 
So there is more of a technical part, I think, and more of a human part. That also means that on this side, the policy and system development, there might be a focus group that also thinks about whether what we're collecting is ethical, while on the other side the development could be made open, open source. In that regard we could also show some history with a success rate, to engage developers to think along and make it more of a puzzle that we are trying to solve together to make this better. 
And at the end, we talked about risky areas as well: how can we communicate upfront with people who are in risky areas that they might be investigated more often, or that it's more likely they will be under the microscope? And during the investigation there might be a different approach for people in these risky areas, so that they also know: okay, this is not personal, it's because I'm in this area that the metrics are more likely to point my way. 

P2: 
And related to that, in an effort to make it a bit friendlier: we talked about the fact that being investigated not only possibly has a negative financial impact, but also takes a stressful toll on people, even if they're not doing anything wrong; it's still something they have to deal with. 

P1: 
Yeah, definitely more of this human approach. 

P2: 
So these are the ones I could remember really quickly, but I don't know if you guys have more, because we talked about way more. 

R1: 
Yeah, I see more post-its and sketches. 
Anything else to highlight? 

P3: 
I'm not sure if you mentioned it, but making the success rates and the history visible is something to get developers engaged from more of a curiosity point of view. 
Yeah, I think that's that. 

P2: 
I think I might repeat some of the things that you mentioned before, but from a different perspective. 
So indeed, I think transparency was a key thing in what we talked about, and we see that in different places. 
One is the one we just mentioned, about the history and the success rate, aimed at people who have the knowledge to understand and read the algorithm and how it works. 
So we provide a way of seeing how things improve, or might not improve, over time. But at least, as you mentioned, we highlighted that it's something we do together rather than something that is just handed down from the top. 
So that indeed requires an open approach from every perspective, I would say. 
But there is also the idea of being more transparent when you are being investigated, regardless of the situation: for example, being able to share which data has been collected and why you are being investigated, independently from other topics, can help people understand why this came to their door, basically. 
And at that specific touchpoint, I was thinking we can offer new ways of opening the dialogue right away: by letting people learn what data has been collected, but then also offering them the opportunity to give feedback about it. 
I think another thing we talked about was hopefully finding ways to create shortcuts that can help improve the algorithm or the AI without having to go through lengthy bureaucratic processes, where applicable. 
Just so we make sure it is perceived as something that can improve over time, and will not take ten years before it's as good as it needs to be. 
And maybe we can leave to the bureaucratic process those aspects that relate more to ethics, since those are usually more sensitive and for that reason deserve it. 
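
A minimal sketch of the investigation-time disclosure P2 describes above, i.e. what a decision subject might see when learning they are under investigation. The field names, case details, and feedback URL are all hypothetical assumptions, not anything from the actual system:

```python
from dataclasses import dataclass

@dataclass
class InvestigationDisclosure:
    """What a decision subject sees at the moment of investigation:
    the data collected, the reason for selection, and a feedback channel."""
    case_id: int
    data_collected: list[str]   # sources used, e.g. listings, reports
    reason_selected: str        # why this case was flagged and how it ranked
    feedback_channel: str       # where to contest or comment right away

disclosure = InvestigationDisclosure(
    case_id=4711,
    data_collected=["public rental listing", "neighbor report"],
    reason_selected="an active online listing combined with a report",
    feedback_channel="https://example.org/feedback/4711",  # placeholder URL
)
print(disclosure.reason_selected)
```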

P1: 
Yeah, I think that's a nice one: the feedback that you're collecting and having this dialogue, like you were talking about. Then we talked about this development part, that it's open to the public so people can also improve it. 
But on the monitoring side there could also be something where, if people keep saying that certain metrics are actually not very relevant, the system might learn from that by itself that it may not be working in the most optimal way. 
There was one other thing that you said. 
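
A minimal sketch of the monitoring loop P1 describes, where repeated feedback flags a metric as possibly irrelevant. The metric names, feedback format, and dispute threshold are invented for illustration:

```python
from collections import Counter

# Hypothetical feedback records: each names the metric the decision
# subject disputed as irrelevant or unfair to their case.
feedback = [
    {"case_id": 101, "disputed_metric": "water_usage"},
    {"case_id": 102, "disputed_metric": "water_usage"},
    {"case_id": 103, "disputed_metric": "num_reports"},
]

DISPUTE_THRESHOLD = 0.25  # assumed cutoff: disputed in over 25% of cases

def flag_disputed_metrics(feedback, total_cases):
    """Count how often each metric is disputed; metrics disputed too
    often are flagged for review on the policy/system-development side."""
    counts = Counter(item["disputed_metric"] for item in feedback)
    return [m for m, n in counts.items() if n / total_cases > DISPUTE_THRESHOLD]

print(flag_disputed_metrics(feedback, total_cases=6))  # ['water_usage']
```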

P2: 
I don't know if that's the same as what you just mentioned, but at some point in our conversation we also talked about how there could be a system for automated collection and monitoring of the feedback, in the bottom-right section. 
But it would be very good if that information were publicly available, ideally to everyone, so that people who can or want to help improve the algorithm can take it into account, or raise the right awareness on those subjects. 
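
A minimal sketch of what publishing that automatically collected feedback could look like: an aggregate summary with counts only, no case-level or personal data, so it could be shared openly. All numbers and field names are made up:

```python
import json
from datetime import date

# Hypothetical aggregate statistics from the automated feedback monitor.
summary = {
    "generated": date.today().isoformat(),
    "total_investigations": 120,
    "feedback_received": 34,
    "disputed_metrics": {"water_usage": 9, "num_reports": 3},
    "decisions_overturned": 5,
}

# Written to a public file so outside developers and citizens can inspect it.
with open("feedback_summary.json", "w") as f:
    json.dump(summary, f, indent=2)
```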

P1: 
Yeah, now I also remember what you said, and it made me think about something: at this moment where you are being transparent about the metrics, it would maybe also be nice to talk about this contestable part. 
So you can also mention that we are trying to improve the system all the time. 
It's more of a communication thing, but you can mention it so that people also know we are not just accusing you of something and presenting it as the truth. 
It's also nice to show that we are innovating in this, that there are procedures, and that we are constantly trying to improve it. 
I think it's nice for people to also be able to learn a little bit more about this process and the way it's being handled. 

P2: 
Yeah, and I like that, because earlier, [name], I think you mentioned seeing the confrontation more as a dialogue, something that is not perceived as negative but is really seen as positive, because it allows for improvements. 
I think that's also key in that way of communicating: it's not really about us versus them, or this government-versus-public thing. 
Simply because of how the system is made, there are intrinsically two perspectives on things, but if there is openness for both parties to improve the system, I think that's good. 
I really like it when governments open platforms asking the public, the citizens, for feedback. For example, the city where I live opened up an app for providing information about which roads are the most dangerous, and for sharing ideas, even about what can be done to solve those situations. It would be great if a lot can be done with that, and I hope they're going to do it, but even just having the ability to share these ideas, frustrations, or insights is something super valuable, and I think the same mindset can and should be applied to these processes. 
I think many of the ideas we mentioned try to do that in different places in the process. 
And then I was thinking: we now talk a lot about the public, the people being investigated, as the ones giving feedback, but we also have a set of users who are the humans enforcing the law. 
I'm not super sure whether that is in the diagram, but even their feedback, which might not directly affect the AI or the algorithm, has a lot of value, and maybe should have even higher value, because they have to deal with the algorithm and how it applies to reality over a longer time span, so they might be able to spot... 

R1: 
Shortcomings as well. 

P2: 
Exactly. 

R1: 
So in the diagram, it's basically the people here in the upper left, right, who translate automated predictions into ultimate decisions. 
You can imagine a feedback loop from them to the development. 

P2: 
Exactly. 
Yes. 

R1: 
Totally. 
Yeah, that makes sense, because as you're saying, they have first-hand experience; they see how this system basically plays out... 

P2: 
Especially when it comes to patterns. 
If something keeps happening and the AI is not smart enough to correct itself over time, then we really need that loop to happen, possibly before it needs to go through the law. 

R1: 
Yeah. 

P2: 
That seems like a better experience, I guess. 

R1: 
Yeah, good point. 

P1: 
Yeah, I thought about it because we have been talking a lot about the human side and the ethical part, but the results should also be a big part of it. 
That can also be a democratic discussion that you have about it. 
But if, for example, you take out metrics because a lot of people feel that they are too private or not ethical, it could be that at some point you get to a place where the results are getting lower and lower, because you're not combining all these metrics anymore. 
Then it is also interesting to have this discussion with each other. 
How important is it to have these results, next to the question of whether the system is still ethical? How important is it to us to have a fair system? 

R1: 
So balancing different priorities and weighing them against each other. 

P1: 
Yeah, the human side is very important, but there should also be a discussion about whether the results reflect what we want. Just an addition. 

P3: 
Yeah, since you mentioned the results, I'm just thinking, we didn't discuss this, but the results for one type of group are probably different from those for another. 
And it's sort of a game of: am I going to get caught, or am I going to be mistaken for a person that's breaking the law? And say decision makers are predicting that a certain number of people are breaking the law, and this AI algorithm has had some metrics taken away because they were not performing. 
For me, it kind of falls apart when you think about the results, because it could be that we're correcting things so that fewer and fewer people are getting caught falsely, but we don't actually know whether those cases are false or not. 

P1: 
Yeah, that's a valid point. 
So is there an estimation of how many people are actually breaking this law? And how can you be sure that the people you accuse in the end, or that you are fining, are actually committing fraud? There are also some errors in that system. 

P3: 
Yeah. 
How do we account for the error? 

P2: 
Maybe that becomes more of a technical challenge, in the sense that you could also say: every time we improve the current algorithm, we run both versions at the same time for a period, and then we see which of the two is delivering better results. And you do that every time, over time. 
I don't know, maybe there are better statistical approaches to that, but I think that's a good point indeed: how do you know that you are actually improving if you're only looking at one number of results? 
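
A minimal sketch of the side-by-side comparison P2 suggests, sometimes called a champion/challenger setup: run the current and the improved model on the same live cases for a trial period, then compare. The scoring rules, case records, and precision metric are hypothetical, and the blind spot P3 raises still applies, since only confirmed outcomes enter the number:

```python
# Hypothetical case records and scoring rules, for illustration only.
cases = [
    {"num_reports": 3, "confirmed_fraud": True},
    {"num_reports": 1, "confirmed_fraud": False},
    {"num_reports": 2, "confirmed_fraud": True},
]

champion = lambda c: c["num_reports"] >= 1    # current rule: flags nearly everyone
challenger = lambda c: c["num_reports"] >= 2  # candidate rule: more selective

def precision(model, cases):
    """Share of flagged cases later confirmed as fraud. Note P3's blind
    spot: fraud that is never flagged never enters this number."""
    flagged = [c for c in cases if model(c)]
    if not flagged:
        return 0.0
    return sum(c["confirmed_fraud"] for c in flagged) / len(flagged)

# Run both on the same cases for a trial period, then compare.
print("champion:", precision(champion, cases))      # 2/3 ~ 0.67
print("challenger:", precision(challenger, cases))  # 2/2 = 1.0
```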

P3: 
Yeah, exactly. 

P2: 
And what you were saying now triggered me, because I was also reflecting on transparency and sharing things. Of course we have a group of people in society who are able to understand how an AI system works, or even look at the algorithm, and I don't know how open it should be, maybe the code is literally given out, but there are so many who can't do that. 
I doubt you can make this whole process legible for everyone in society, but at least we should aim to make that group as big as we can. And I was thinking: what if, for each AI system that the government or the municipality is using, there is a report every year or every two years, sharing some of the results of not only how it works, but also what it allowed us to do? That can also help people understand why we have it in place and what it is helping to do. We even started with the idea of the parking cars that are allowed. 
No, actually, with this example of the renting issues, one part of what we're trying to solve is to remove some of the annoying aspects of the job and focus only on the part that is more human. 
That is something I think would be good to see back in a report, either in numbers or even, I don't know, maybe in interviews with the people who are working with it. 
That way we also keep bringing it back to why we are using these systems, so that they don't remain a black box, an unclear computer manipulating society, but are tied back to the use they have for us as a society. 

P1: 
Yeah, I think there are some very interesting communication moments as well. And on that, I also remember that when we were talking about this, at some point I was just thinking: why is this really important? Why would I report on my neighbor, for example? It should be very clear why this is even interesting to report about, or why we are investigating something, not just "we got a report", but because it's important that people who want to live in Amsterdam can actually live there, instead of people renting out these places to earn more money, to build businesses. 
So I think a big part is taking away, well, maybe you cannot take away all the frustration, but you can have this softer approach, make people understand wherever they are, and get people thinking about how to improve the system, by always making clear why we are doing this and why it is important to the municipality of Amsterdam. 

P2: 
There was one last thing we discussed a lot at the beginning but didn't mention now, which was related to asking for better information during the reporting itself. It might influence the AI or not, but it was something we talked about, and I think it's worth mentioning again: helping the reporting process collect more useful data that makes it easier to figure out whether something is actually the case, so that we don't rely only on information available in the public domain or the governmental domain, but more on the reporting itself, and we get more data to potentially also train the AI. 

P1: 
Yeah, and would that then be used on this side, or on this side, or on both? Would it be in monitoring, or in policy and system development? 

P2: 
My mind was even more at the very beginning, yeah, on the human-AI system, because then it also becomes very specific and very private. 
We were also talking about how good it is to ask a citizen to basically perform part of the fact-checking for you. 
I didn't mention it before, but I was also thinking that these situations sometimes involve neighbours who are frustrated with each other, so the chance that someone lies, knowing what effect it will have on the neighbour, just to get rid of the frustration, is also high. So I'm not sure how much we should rely on that. 
But maybe it's more in the way the reports are designed. If you are on the phone and someone is asking you specific questions, and maybe that's already the case, because we don't know anything about how the actual reporting is performed at the moment, but what if, while you are on the phone, they actually ask: how long has the situation been going on? Do you know that they are not friends? Do you know that they are paying? Can you give us some of this proof? 
Of course, as in any investigation, everything that has been said needs to be checked again, but having that might already improve a little the quality of the data you bring into the system. 
And the question is how much of that is fed into the AI, since it's not data that has already been checked, and how much of it you leave to the humans as something to deal with, which sits, I guess, in the human-AI system block, where the human is deciding to press yes or no. 

P1: 
Yeah, it is very interesting, and maybe when you collect this and create a profile, that could also lead to new metrics, so I think that is very, very interesting. 
Also, it could be this one angry neighbor who keeps calling about you, and then you have a very big profile; but maybe it becomes more interesting if you have two very angry neighbors. Those kinds of things can be very interesting to write down somewhere, to collect.
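
A minimal sketch of the profile idea P1 ends on: counting distinct reporters per address, so that three calls from one angry neighbor weigh less than two independent reports. Addresses and reporter IDs are invented:

```python
from collections import defaultdict

# Hypothetical report log: (reported_address, reporter_id) pairs.
reports = [
    ("Keizersgracht 1", "reporter_A"),
    ("Keizersgracht 1", "reporter_A"),
    ("Keizersgracht 1", "reporter_A"),
    ("Herengracht 2", "reporter_B"),
    ("Herengracht 2", "reporter_C"),
]

distinct_reporters = defaultdict(set)
for address, reporter in reports:
    distinct_reporters[address].add(reporter)

# Two independent reporters may be a stronger signal than one repeat caller.
for address, reporters in sorted(distinct_reporters.items()):
    print(address, "independent reporters:", len(reporters))
```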