Episode 12: Dr Stefanie Czischek

Many machine learning techniques have been developed to run in an efficient way on classical computers, but what could we gain from designing them around biological brains instead, and how might these methods tell us something new about quantum systems? Take a listen to Episode 12 of insideQuantum to find out!

This week we’re featuring Dr Stefanie Czischek, an Assistant Professor at the University of Ottawa. Dr Czischek obtained her PhD from the University of Heidelberg and held a postdoctoral position in the Perimeter Institute Quantum Intelligence Lab, before taking up her current position.



🟒 Steven Thomson (00:06): Hi there and welcome to insideQuantum, the podcast telling the human stories behind the latest developments in quantum technologies. I’m Dr. Steven Thomson, and as usual, I’ll be your host for this episode. In previous episodes, we’ve talked a bit about quantum computing, the theories that underlie it, and some of the hardware on which it might be implemented, but there’s more to quantum technologies than just building a slightly different type of computer. Cutting edge developments in machine learning and artificial neural networks could lead to a dramatic leap forward in our ability to simulate and understand quantum matter. Today’s guest is working on implementing neural networks in quantum technologies. It’s a pleasure to welcome Dr. Stefanie Czischek, an assistant professor at the University of Ottawa. Hi Stefanie, and thank you for joining us today.

🟣 Stefanie Czischek (00:50): Yeah, hi there. Thank you very much for inviting me. I’m very glad. It’s a great pleasure to be here today.

🟒 Steven Thomson (00:56): So before we get onto neural networks and the topic of your research, let’s first talk a bit about your journey to this point, and let’s start at the beginning. What is it that first got you interested in quantum physics?

🟣 Stefanie Czischek (01:10): Yeah, that’s a really interesting question and I actually just got this question last week from my students during my first lecture, and I realized that, yeah, it’s really hard to answer for me. I don’t think there was really one point that got me interested in quantum physics. So just after high school I decided I wanted to study physics and I started doing my bachelor in Heidelberg in Germany, and I really liked it, and so I continued with my Master’s, and yeah, in the end I found a really nice project for my Master’s thesis, which was about numerically simulating quantum systems or quantum many-body systems out of equilibrium. And yeah, this got me really interested. I really like this topic, and so I decided to continue with a PhD, where then I started to also add artificial intelligence to this topic, and this was really the point that got me excited about this field and where I really saw, I want to study this in more detail.

🟒 Steven Thomson (02:10): So was it during your PhD then that you decided that this was the career for you, something that you wanted to keep doing long term, not just as part of university, but as part of a job?

🟣 Stefanie Czischek (02:22): So during my PhD, I definitely realized that I would love to stay in the field of artificial intelligence, and ideally I would like to combine it with quantum physics. So I really like this application of artificial intelligence in the field of quantum physics. At this point, I didn’t really plan for an academic career for my future. So during my PhD, I still thought, well, after my PhD I would just go to industry and do some research job. But yeah, then in the end everything went differently, everything worked out, and I’m still in academia.

🟒 Steven Thomson (02:55): So normally I ask people, if you weren’t doing your current job, what is it you would be doing? I guess you mentioned there industry. What kind of industrial opportunities were you interested in?

🟣 Stefanie Czischek (03:06): That’s a great question. I have to admit that I didn’t look into it in detail. Yeah, I basically looked at different companies, for example, car companies who are working on using artificial intelligence to develop self-driving cars or different software companies who are trying to implement artificial intelligence, and they have pretty huge research departments there as well. So I started looking in this direction and I probably would’ve ended up there.

🟒 Steven Thomson (03:36): So if you’d gone into industry, you would probably have left the quantum physics side of things behind and gone fully down the AI machine learning kind of route?

🟣 Stefanie Czischek (03:46): I think so, yes, just because especially in Germany, it would’ve been much easier to find a job without the quantum part, but only with the AI part.

🟒 Steven Thomson (03:55): Yeah, that’s true. There are a lot of jobs at the moment, it seems, for machine learning engineers and data scientists and all that kind of thing. And I guess slightly fewer jobs for quantum physicists, although I suppose that might be changing in the near future.

🟣 Stefanie Czischek (04:09): Yeah, that’s true. Yeah, I think so too.

🟒 Steven Thomson (04:13): So you’ve mentioned there you’re interested in combining quantum physics with machine learning. What’s the goal of this? What do you think is the biggest challenge in your field at the moment that people are working towards?

🟣 Stefanie Czischek (04:26): I think the biggest challenge is scaling up to big systems. So I mean, the idea of what we are doing at the intersection of artificial intelligence and quantum mechanics is to use neural networks to simulate qubit systems or quantum many-body systems. And we want to have a classical numerical simulation of these quantum systems. And this is where the artificial neural networks come in, as a very efficient wave function ansatz, which we can use to classically simulate prepared quantum states. And this is a very efficient method and it works well for several states, but still, if we want to go to large qubit systems - which cannot be described exactly - in these cases, classical simulations also start to struggle, and we still need to scale everything up, which means we have huge computational costs, which we at some point just cannot cover anymore. And I think this is the main challenge: we still need to find a way to simulate these large qubit systems.
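
[Editor’s note: for readers who would like to see what “a neural network as a wave function ansatz” can look like in practice, here is a minimal sketch in the spirit of the restricted Boltzmann machine ansatz commonly used in this field. All sizes and parameter values are illustrative placeholders, not taken from any of Dr Czischek’s work.]

```python
# A toy restricted Boltzmann machine (RBM) wave function ansatz: psi(s) is
# an unnormalised complex amplitude assigned to a spin configuration
# s in {-1, +1}^N. Complex parameters let the ansatz carry phases.
import numpy as np

rng = np.random.default_rng(seed=0)
N_VISIBLE, N_HIDDEN = 4, 8   # more hidden units -> more expressive ansatz

a = 0.01 * (rng.standard_normal(N_VISIBLE) + 1j * rng.standard_normal(N_VISIBLE))
b = 0.01 * (rng.standard_normal(N_HIDDEN) + 1j * rng.standard_normal(N_HIDDEN))
W = 0.01 * (rng.standard_normal((N_HIDDEN, N_VISIBLE))
            + 1j * rng.standard_normal((N_HIDDEN, N_VISIBLE)))

def log_psi(s: np.ndarray) -> complex:
    """Log of the unnormalised RBM amplitude for one spin configuration."""
    # The hidden units can be summed out analytically, leaving a product of
    # cosh factors - this is what makes the RBM ansatz so cheap to evaluate.
    return a @ s + np.sum(np.log(2 * np.cosh(b + W @ s)))

# Amplitude of one basis state, e.g. |up, down, up, down>:
print(np.exp(log_psi(np.array([1, -1, 1, -1]))))
```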

🟒 Steven Thomson (05:34): And then this is where you come in with your work with neural networks. So, neural networks might be familiar to some of our listeners who are quite up to speed on modern developments in technology. But for anyone who’s not familiar with a neural network, could you give us a brief summary of what one is and why they’re useful?

🟣 Stefanie Czischek (05:54): Yeah, so neural networks, I mean, we know that they appear in various different fields nowadays. We use them pretty much every day in our phones or in our cars or wherever. So the idea is…like, a very standard example of an application for artificial neural networks is the classification of an image. So the basic example is you give the neural network an image and it tells you whether there’s a cat or a dog or a rat or whatever in the image. So some kind of animal. This is also of course used for self-driving cars. So you want to know whether there’s like a speed sign in the image of a camera or something like this, and these classification tasks can also be used in the field of quantum mechanics. So this is something we did recently where we used this to detect transition lines in an experimentally measured diagram - a phase diagram - to automatically tune a quantum dot experiment.
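
[Editor’s note: the quantum-dot work referenced here uses a classifier trained on measured stability diagrams; that code is not reproduced here. Purely to illustrate the general idea, the following sketch trains a small network to label image patches as “transition line” versus “no line” - with random stand-in data and hypothetical shapes, and no claim to match the actual pipeline.]

```python
# A generic supervised classifier of the kind described in the episode,
# built with PyTorch. The 16x16 "diagram patches" and their labels are
# random placeholders standing in for real measurement data.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),             # 16x16 patch -> 256 input features
    nn.Linear(16 * 16, 64),
    nn.ReLU(),
    nn.Linear(64, 2),         # two classes: transition line / no line
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(32, 16, 16)        # stand-in training images
labels = torch.randint(0, 2, (32,))      # stand-in labels

for _ in range(100):                     # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
```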

(06:55): So this is one approach where neural networks can help to advance quantum mechanics. You can use them to automatically tune experimental setups and to reduce the amount of human interaction. Another approach is to use neural networks kind of as dreaming devices. So there are neural networks which are called generative neural networks, and you can train them to encode a probability distribution, and then they can dream, meaning they can produce samples or data following this probability distribution. And this is what we use to simulate quantum many-body systems: we try to train our network to encode a probability distribution describing a quantum state, and then we can generate more measurement data from this classical neural network instead of running the quantum experiment.
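
[Editor’s note: the “dreaming” step can be made concrete with a small example. For a real-valued restricted Boltzmann machine that encodes a probability distribution over bitstrings, new samples are typically drawn by block Gibbs sampling. The sketch below uses random, untrained placeholder parameters purely to show the mechanics.]

```python
# Block Gibbs sampling from a (classical, real-valued) RBM: alternately
# sample hidden units given visible units and vice versa; after enough
# sweeps, the visible configurations follow the encoded distribution.
import numpy as np

rng = np.random.default_rng(seed=1)
N_VISIBLE, N_HIDDEN = 4, 8
W = 0.1 * rng.standard_normal((N_HIDDEN, N_VISIBLE))   # untrained placeholder
b_v, b_h = np.zeros(N_VISIBLE), np.zeros(N_HIDDEN)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dream(n_samples: int, n_sweeps: int = 200) -> np.ndarray:
    """Generate binary samples v in {0,1}^N from the RBM distribution."""
    v = rng.integers(0, 2, N_VISIBLE).astype(float)
    kept = []
    for sweep in range(n_sweeps):
        h = (rng.random(N_HIDDEN) < sigmoid(b_h + W @ v)).astype(float)
        v = (rng.random(N_VISIBLE) < sigmoid(b_v + W.T @ h)).astype(float)
        if sweep >= n_sweeps - n_samples:   # keep only the last sweeps
            kept.append(v.copy())
    return np.array(kept)

print(dream(5))   # five "dreamed" measurement outcomes
```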

🟒 Steven Thomson (07:46): I see. Okay. You mentioned a minute ago there how neural networks can act as a very efficient wave function ansatz. There are other ways that you can classically simulate quantum systems, of course. I think one of the most well known is probably tensor networks. How do neural networks compare to tensor networks? I think they’re…they have very different underlying principles, but in terms of the types of simulations you can do and the type of computational power that you get, are they roughly equivalent or are they good at different things?

🟣 Stefanie Czischek (08:20): I think they are good at different things, definitely. So I think it’s really hard to compare them. With tensor networks, you know that they are very efficient for one-dimensional systems, and you also know that in one dimension the computational cost scales exponentially when you have a strongly entangled state. This is different for artificial neural networks. So if you have an artificial neural network wave function ansatz, it doesn’t care too much about the entanglement. So this has been shown: you can reconstruct entangled states without the computational cost scaling too much, and you can also go to higher-dimensional systems. So this all works very well, but of course you have other downsides. It’s not clear which states exactly can be represented efficiently with artificial neural networks. Also, you always have this variational training part, which means that, in contrast to tensor networks, where you know pretty much exactly which error you’re making when you’re representing a state, this is not given for artificial neural networks, so you never know exactly how well you are representing your state.

🟒 Steven Thomson (09:28): I see. So they’re suitable for a very wide range of things, but the limitations are maybe not so precisely known as for other many-body techniques.

🟣 Stefanie Czischek (09:39): I mean, you also have to take into account that this is a rather new field, while tensor networks have been explored for years, so this might still change. But yeah, the limitations are not as well known as for other numerical methods.

🟒 Steven Thomson (09:51): I see. Okay. And as I mentioned a minute ago, I think a lot of listeners might be quite familiar with ideas of neural networks, machine learning, and similar concepts. They’ve become a huge part of our lives and a huge part of modern technology over the last few years. But yes, these machine learning techniques are classical techniques at heart, and there are a couple of different ways that you can apply machine learning to quantum systems. Can you tell us a little bit about the particular approach that you’re taking here? So you’re using classical techniques to understand quantum systems, but you’re not deploying machine learning or neural networks on quantum hardware. Is that correct?

🟣 Stefanie Czischek (10:34): That’s correct, yeah. So yeah, this entire intersection of artificial intelligence and quantum mechanics can be approached from two sides. So either you can use artificial intelligence to advance quantum technologies, which is what I’m doing, or you can go the other way around and use quantum computers to advance artificial intelligence or artificial neural networks, which, yeah, is also a very interesting field, and there are quite a lot of people working in it. It’s just not my focus right now.

🟒 Steven Thomson (11:08): I see. So what are the big near term goals then for the application of neural networks to simulating quantum systems? Are there any particular challenges that people are excited about or is it still very exploratory that you’ve got this new technique and there are a lot of different things that can be studied and people are just still trying to study and classify as much as possible?

🟣 Stefanie Czischek (11:31): Yeah, I think so. There’s definitely still a lot of work going on with exploring the field since it is pretty young and pretty new. People are still considering different network architectures, like restricted Boltzmann machines or recurrent neural networks. So people are trying to explore which network architectures are most efficient and from which you get the most benefits, which can represent which kind of states most efficiently and so on. And then there’s of course also the goal to look at systems that could not be simulated classically with the existing numerical methods. So where can we go beyond these limitations that we experienced so far and can we gain any new information from our neural network ansatz?

🟒 Steven Thomson (12:16): So you mentioned that for neural networks, one of the crucial components is this training stage, and you’ve talked a bit about how you can apply neural networks to systems that are very difficult to study with existing methods. How do you train a neural network then on a problem that can’t be solved by another method? Where do you get the input data for it?

🟣 Stefanie Czischek (12:38): Yeah, this depends a little. So there exist two different ways you can train a network to represent a quantum state. One is just based on measurement data. So for example, if we assume we have some quantum experiment which can prepare some state, then we can perform projective measurements and we get a set of measurement data. But usually with state-of-the-art experiments nowadays, this data that we can access is still limited, because it still takes quite some time to re-prepare the quantum state after every measurement, since the state gets destroyed when we do a projective measurement. And so the amount of measurements that we have is still limited, but we can train our neural network on this limited amount of data, and once it is trained, we can generate more measurement data and with this reduce the variance of our operator expectation values. So this is one approach, and I mean you don’t really need a numerical model describing the state, because you can just directly train it on experimental data, in the ideal case.
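
[Editor’s note: as a concrete, hedged illustration of this first, data-driven training mode: for a real-valued RBM, a standard way to fit a limited set of measured bitstrings is contrastive divergence, sketched below with synthetic stand-in “measurement data” and with the bias terms omitted for brevity. This is a generic textbook recipe, not the specific training scheme used in Dr Czischek’s work.]

```python
# Contrastive-divergence (CD-1) training of an RBM on measurement-like data:
# each update pushes the model's statistics towards the data's statistics,
# i.e. it approximately follows the gradient of the log-likelihood.
import numpy as np

rng = np.random.default_rng(seed=2)
N_VISIBLE, N_HIDDEN, LR = 4, 8, 0.05
W = 0.01 * rng.standard_normal((N_HIDDEN, N_VISIBLE))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic "projective measurement" outcomes: the state mostly collapses
# to |0101> or |1010>, mimicking a limited experimental data set.
data = np.array([[0, 1, 0, 1]] * 50 + [[1, 0, 1, 0]] * 50, dtype=float)

for _ in range(200):
    v0 = data
    ph0 = sigmoid(v0 @ W.T)                        # hidden probabilities (data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = (rng.random(v0.shape) < sigmoid(h0 @ W)).astype(float)  # reconstruction
    ph1 = sigmoid(v1 @ W.T)                        # hidden probabilities (model)
    # CD-1 update: <h v>_data - <h v>_model, averaged over the batch
    W += LR * (ph0.T @ v0 - ph1.T @ v1) / len(data)
```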

(13:42): A second approach to training these networks is to reconstruct the ground state of a given Hamiltonian, where you train the network such that it minimizes the energy expectation value, so it converges to the ground state and gives you the ground state of the given Hamiltonian. So this can be done without any knowledge of the system or any numerical simulations or whatever. So the state doesn’t even need to be experimentally prepared; you only need an expression for your Hamiltonian, and you can train your network on this. What you also can do - what we did in a very recent work - is to combine these two. And so together with my collaborators at the University of Waterloo, we recently showed that you can use this experimental training data, so a very limited amount of experimental training data, to enhance the performance of this ground state search. So you kind of pre-train your network on the experimental data, and this actually brings you closer to the ground state when you then just minimize the energy.
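
[Editor’s note: the second, “Hamiltonian-only” training mode can also be shown in miniature. For a chain of three spins the full Hilbert space fits in memory, so the exact energy expectation value of a simple positive ansatz can be minimised directly. The ansatz below is a Jastrow-style exponential of spin features standing in for a neural network, and finite-difference gradient descent stands in for proper variational Monte Carlo - a toy illustration of the principle, not the method from the paper discussed.]

```python
# Variational ground-state search for a tiny transverse-field Ising chain:
# H = -J sum_i sz_i sz_{i+1} - G sum_i sx_i, minimising <H> over the
# parameters of psi(s) = exp(theta . features(s)).
import itertools
import numpy as np

N, J, G = 3, 1.0, 0.5
basis = np.array(list(itertools.product([1, -1], repeat=N)))
index = {tuple(s): i for i, s in enumerate(basis)}

# Build the 8x8 Hamiltonian matrix explicitly.
H = np.zeros((len(basis), len(basis)))
for i, s in enumerate(basis):
    H[i, i] = -J * sum(s[k] * s[k + 1] for k in range(N - 1))
    for k in range(N):                    # sx_k flips spin k
        t = s.copy(); t[k] *= -1
        H[i, index[tuple(t)]] -= G

def energy(theta):
    """Exact <H> for psi(s) = exp(theta . [s, nearest-neighbour s*s])."""
    feats = np.concatenate([basis, basis[:, :-1] * basis[:, 1:]], axis=1)
    psi = np.exp(feats @ theta)
    return (psi @ H @ psi) / (psi @ psi)

theta = np.zeros(2 * N - 1)
for _ in range(500):                      # finite-difference gradient descent
    grad = np.array([(energy(theta + 1e-4 * e) - energy(theta - 1e-4 * e)) / 2e-4
                     for e in np.eye(len(theta))])
    theta -= 0.1 * grad

print("variational energy:", energy(theta))              # close to, slightly above...
print("exact ground energy:", np.linalg.eigvalsh(H)[0])  # ...the exact value
```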

🟒 Steven Thomson (14:43): Oh, that’s really interesting. That was going to be my next question actually. What happens if you combine the two? Oh, that’s really interesting to learn that it works. And what type of systems does this work for? Does it work for any quantum system, or are there some particular properties of ground states that you require? So for example, for listeners who are familiar with many-body physics, to come back to the tensor network example, these work very well for what we call gapped phases. Is there any restriction like this for neural networks? Is there any particular class where they work very, very well, or do they tend to work very well on any problem that you try to use them for?

🟣 Stefanie Czischek (15:20): Yeah, this is a really tough question. I think it’s not been explored in this detail yet. I remember that there are some issues if you have a highly degenerate ground state, so multiple states that give you the same ground-state energy. In this case, because you kind of sample states from your network, you might not sample all of the states that contribute to this degenerate ground state. So this is something where they might definitely be limited. But yeah, I think it has not been explored yet which states are good and which states are more difficult.

🟒 Steven Thomson (15:59): I see. I guess the extreme example of a system where you have many possible states would be something like a spin glass, which was the subject of last year’s Nobel Prize - at the time of recording, last year’s Nobel Prize; by the time this comes out, there may be a new Nobel Prize announced. But yes, I guess in the case of something like a spin glass, that’s an NP-hard problem. And I presume classical neural networks still cannot solve an NP-hard problem.

🟣 Stefanie Czischek (16:25): Yeah, that’s true. Yeah.

🟒 Steven Thomson (16:27): Okay. But then for a more generic system that doesn’t have this kind of extremely hard computational problem, they seem to work quite well, even I guess in two and three dimensions, where things like tensor networks are starting to struggle.

🟣 Stefanie Czischek (16:41): Yeah, exactly. So for the models where they have been benchmarked, they usually work quite well. But again, you also have to take into account that in your neural networks you have so-called hidden neurons, and if you have more hidden neurons, you increase the expressivity of your network. So I think in the end, you can always represent the ground state and you can find the ground state, but you might require a huge amount of hidden neurons. So for some states it might still be very computationally expensive, but again, to my knowledge, it has not been explored in much detail how the number of neurons behaves for different kinds of states.
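
[Editor’s note: the trade-off mentioned here - expressivity bought with hidden neurons - translates directly into parameter count. A quick back-of-the-envelope helper, assuming a standard RBM layout:]

```python
# Number of variational parameters in an RBM: one weight per
# visible-hidden pair plus one bias per neuron. Widening the hidden
# layer therefore scales the cost linearly in n_hidden.
def rbm_parameter_count(n_visible: int, n_hidden: int) -> int:
    return n_visible * n_hidden + n_visible + n_hidden

for n_hidden in (10, 100, 1000):
    print(n_hidden, "hidden units:", rbm_parameter_count(50, n_hidden), "parameters")
```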

🟒 Steven Thomson (17:27): Oh, I see. So it is a very new field with a lot still to learn, I guess.

🟣 Stefanie Czischek (17:33): Yeah, for sure. Yeah.

🟒 Steven Thomson (17:36): So one of the things that you’ve mentioned in some of your research is that you work on what you call biologically inspired neural networks. What does this mean? What are the things that we can learn from biology, from nature that we can apply to neural network simulations of things like quantum systems?

🟣 Stefanie Czischek (17:55): Yeah, so this is definitely one of my favorite topics. So I like to work on this intersection of biologically inspired neural networks and quantum systems. And the motivation behind this is that if we look at our artificial neural networks that we all know pretty well now, then, as the name suggests, initially they were inspired by the biological brain, but over time they have evolved pretty far away from the brain, and you can already see this in their setup. So you can imagine that your brain doesn’t have a layered neuron setup, but is more like a fully connected or randomly connected network of neurons. And this development…I mean, on the one hand it’s good, because these neural networks have been developed with the motivation to optimize their performance on our conventional computers, and that’s why we can use them now. But at the same time, we are missing some of the benefits of the biological brain, which is on the one hand extremely small and also extremely energy efficient.

(18:54): So if we compare our brain, which has about 86 billion neurons, with the supercomputers that our artificial neural networks are running on, then you can see that our brain is much smaller and consumes way less energy. It’s also extremely fast. So if you just see an image for a few milliseconds, you can already recognize this image and you can interpret it. And this is much faster than our artificial neural networks. And so people are now trying to get this biological aspect back into these neural networks. So this brings us to these biologically inspired neural networks, which hopefully bring these benefits of speed, small size, and energy efficiency back to artificial neural network applications. And the reason why I’m interested in combining this with quantum physics is that I hope that from these biologically inspired algorithms, we can maybe get more insights into quantum mechanics, so we can hopefully overcome the limitations of our conventional computers and artificial neural networks. And at the same time, because these biologically inspired neural networks are so small and so energy efficient, we can hopefully integrate them in quantum experiments, which can, for example, be tuned with artificial neural networks. And so this energy-efficient and small setup makes it possible to directly integrate these network algorithms in the experimental setup, minimizing the interaction with the outside world and the amount of data transfer to classical computers.

🟒 Steven Thomson (20:26): I see. And do you think that in the future there will be specific bespoke hardware designed for things like these biologically inspired networks? You mentioned there that they’re not optimized for current classical computers, but can you imagine a future where we have a specific neuromorphic neural network chip that we use to perform these types of computations?

🟣 Stefanie Czischek (20:46): So this is actually not the future, this is the present. So you can imagine that simulating these biological networks on a conventional computer is extremely expensive and takes quite a lot of time. And so people are developing analog neural hardware, which is basically, for example, electronic circuits or photonic setups which emulate these biological neurons and biological neural networks. And these chips are really very small and extremely energy efficient, and they’re also comparable in speed to biological brains.

🟒 Steven Thomson (21:19): I see. And is the hope that over time these biologically inspired neural networks will eventually overtake the more conventional machine learning approaches, or is there space in the world for both that the conventional approach will always be the more efficient approach on large scale computer clusters, but these neuromorphic networks are going to be useful for different applications?

🟣 Stefanie Czischek (21:42): Yeah, I think it probably depends on the application which kind of neural network you want to use. So, neuromorphic networks are just by nature more efficient for time-dependent data - say, if you want to classify a video - because these neuromorphic or biologically inspired neurons are continuously evolving in time. They can directly react to changes in a video, which is different to artificial networks, where you would need to discretize the video and send it in picture by picture. But on the other hand, these neuromorphic neural networks are probably less efficient for static pictures. So I think it totally depends on the application that you want to run.
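
[Editor’s note: the “continuously evolving in time” point can be illustrated with the simplest spiking-neuron model, the leaky integrate-and-fire neuron. All constants below are typical textbook values chosen for illustration; real neuromorphic hardware implements richer dynamics.]

```python
# A leaky integrate-and-fire neuron driven by a time-dependent input
# current: the membrane potential integrates the input continuously and
# emits a spike whenever it crosses threshold - no frame-by-frame
# discretisation of the signal is needed.
import numpy as np

DT = 1e-4                                            # integration time step (s)
TAU = 20e-3                                          # membrane time constant (s)
V_REST, V_THRESH, V_RESET = -65e-3, -50e-3, -65e-3   # potentials (V)
R_M = 1e7                                            # membrane resistance (ohm)

def simulate_lif(input_current: np.ndarray) -> list:
    """Integrate an input current trace; return the spike-time indices."""
    v, spikes = V_REST, []
    for t, i_in in enumerate(input_current):
        # Euler step of tau * dv/dt = -(v - v_rest) + R * I(t)
        v += DT / TAU * (-(v - V_REST) + R_M * i_in)
        if v >= V_THRESH:                # threshold crossing -> spike
            spikes.append(t)
            v = V_RESET                  # reset after the spike
    return spikes

# A step current switched on halfway through 100 ms of input: the neuron
# stays silent, then fires regularly once the input appears.
current = np.concatenate([np.zeros(500), 3e-9 * np.ones(500)])
print(simulate_lif(current))
```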

🟒 Steven Thomson (22:24): I see. Okay. So this whole field, it sounds really, really new and exciting, and there are a lot of really interesting open questions and a lot of different things that you could do with these networks. But is it also challenging to work in a field where so little has been done? I mean, there’s so much possibility, but it also means you have to reinvent everything yourself from scratch. And in the current academic reality that we find ourselves in, this publish-or-perish culture, did it feel like you were taking a risk going down this sort of route instead of playing it safe and going with more conventional machine learning? Or was this something that you always felt was going to work and going to lead to something really interesting and worthwhile?

🟣 Stefanie Czischek (23:13): Yeah, that’s very interesting. So I just started to look into this field as a young PhD student when the first papers on this topic came out. And I just found it exciting, and I really loved this topic and I wanted to learn more about it, because at this point I had already taken some machine learning classes and I had some quantum classes, so I said, “Okay, I have the ingredients, I can put it together”, and I wanted to work on this. And I never really thought about whether this was a risk or not. And then at some point I started to also combine this topic with neuromorphic computing, which was mainly inspired by the fact that at Heidelberg University, there’s a group who is building neuromorphic hardware, meaning they are using electronic circuits to simulate these biologically inspired neural networks, which makes everything way more efficient than running them on a classical computer.

(24:10): And just driven by this possibility to get access to the neuromorphic hardware, I said, “Okay, I want to figure out if I can combine all of this”. And as a student, I just wanted to do it. I did not think about whether anyone would be interested or not. Actually, people in Heidelberg were very excited about this. When I started to publish my first papers, I figured out that, well, people in the rest of the world were less excited about it, which has definitely changed by now. But at this point in time, I was just happy that I could do what I really liked. And yeah, as I said, I did not plan on an academic career at this point, so I did not worry too much about taking the risk of exploring something new that no one was interested in.

🟒 Steven Thomson (24:58): I see. And then from there, you did end up with an academic career, and you ended up moving from a PhD in Germany to a postdoc in Canada, which - as anyone who’s taken on a new job in the last few years will probably be aware - involved moving countries in the middle of the pandemic. How did this affect your research, and also, I guess, your life? It’s already a big deal to uproot your life and move to another country for your work, but to do this in the middle of a pandemic, when everything is being done remotely and we’re missing a lot of human interaction, that feels like it must have been even more difficult than it would’ve been at any other time. How did you navigate these difficult years, and do you have any advice for anyone else who might be facing similar challenges in the future?

🟣 Stefanie Czischek (25:42): Oh yeah, for sure. So yeah, it was definitely not an easy time during the last two years. I moved right at the beginning of the pandemic. I moved in April 2020, and so the pandemic pretty much hit right after I made the decision to go to Canada. And there was pretty much no way back. And I remember the day where I got on the airplane and I really had no idea where I would end up that day. I was not sure whether I would get into the country or not, but in the end it worked out and I arrived in Canada, and then I worked from home for one and a half years of my two-year postdoctoral contract. So it was pretty lonely, it was pretty difficult, it was really hard to make friends or get social interactions. But I think on the other hand, this was also very helpful for my research, because I didn’t really have anything else to do.

(26:35): So I spent a lot of time working. And on the one hand, this brought my research further, but it also helped me a lot to stay active, and I just tried to look forward, and this is really what I would recommend to people: try to always look forward and keep on going, and at some point things will get better. What also helped me quite a lot is that I started to reach out to my new teammates and asked them to meet for virtual coffees. And so we just both sat in front of our computers and chatted for half an hour. And this was something which was pretty difficult for me at this time, because I was not used to doing this. But in the end it really turned out well, and it helped me a lot to just get connected to the other people. And my teammates - all the students and postdocs in the Perimeter Institute Quantum Intelligence Lab, or PIQuIL for short - are really an amazing team with a really great spirit. And they gave me a very warm welcome, and this made things much easier for me. And so, having learned this lesson - especially now that I’m a professor myself - I really hope that I never forget it: it’s really important that you care about the wellbeing of not only your peers but also your trainees and the team. It makes things just so much better for everyone if you yourself care about everyone.

🟒 Steven Thomson (27:58): I think video calls being so common and so easy is probably one of the legacies of the last few years that I hope will stay with us, and a lesson that we’ll remember, because yes, it does make collaborating with people at a distance much easier. Also, it means that we can all work a bit more remotely and a bit more flexibly and so on, and it means we can conduct podcast interviews on different continents without too much trouble. So yes, I hope video calls remain a common feature in academia, because they’ve opened up a whole new dynamic that wasn’t really there before the pandemic. It’s great that we’re back to mostly in-person conferences and mostly working in the office again, but I hope that we still keep a video call component to things like hybrid conferences, and just the ability to have a chat with someone when you see an interesting paper and have some interesting discussions that you probably wouldn’t have in any other way.

🟣 Stefanie Czischek (29:02): Yeah, I totally agree that video calls are a great thing, and I love being back to in-person meetings and conferences, but it’s still great to have this opportunity. So for example, this term I am teaching at the university, so I cannot travel to conferences, but I can still participate with talks at conferences, because now most of them are hybrid, and I think this is something we should definitely keep for the future.

🟒 Steven Thomson (29:28): Definitely. Yeah, it makes conferences so much more accessible for people who have teaching duties, family duties, care duties, or people who simply don’t have the funding or the grant money to travel long distances to conferences. Having a hybrid option, or even just putting the videos up on YouTube or something after the conference, just makes all that knowledge so much more accessible to people and allows people to learn it and build on it in a way that five years ago wouldn’t have happened. So I really hope that this is a feature that conferences going forward still retain. So you’ve also recently started a new position as assistant professor. It seems like the pandemic years have not slowed you down at all - you’ve been extremely productive in the last few years and have started an assistant professorship. Do you have any…any advice for any other early career researchers who might be at a similar kind of threshold, looking to try and get their first foot on the ladder and get an assistant professor type job? It’s so hard to get these jobs, what’s…what’s the secret?

🟣 Stefanie Czischek (30:35): I think the true secret is just having a lot of luck, being very lucky. Yeah, definitely. For myself, I’ve never been very convinced of myself. I have to admit I wouldn’t even have thought about applying for an assistant professor position, because I thought, “Well, I’m way too young, not good enough for this”. And at some point, luckily, people pushed me in the direction of applying for these positions. And so I started this and I thought, “Oh, well, I can give it a try. I can see how it works out”. And in the end, it did work out. But what I learned from this is that I’ve just been very lucky that the University of Ottawa was looking for someone in pretty much exactly my field at the time that I was looking for a position. And then also I was pretty restricted, because for me, it was always very important that my partner, my husband, moves with me to the place where I go.

(31:33): So I would never go somewhere alone. And so yeah, my husband actually wanted to stay in Canada, and I said, “Okay, then I will look for positions in Canada”. And it worked out. And I think I was just really lucky. And of course, you need the skills to become an assistant professor, but during my career, I have met so many of my peers who definitely have the skills but are struggling to find a position. And I think at this point, it’s really just being lucky, having the chance to get a position in your field at the time that you’re looking for it. And there’s not much you can do. And I think the only advice I can give - which, yeah, is probably easy for me to say now, but not that easy to put into practice - is to just not be dragged down by having a hard time finding a position, because it really involves so much luck. It’s not your fault, it’s just…it’s just the situation. So maybe you have to wait a couple of years until the position that is made for you comes up.

🟒 Steven Thomson (32:38): I think there will probably be a large part of our audience who can relate to that, and who probably feel quite relieved to hear that advice. So there’s one question that I like to ask every single guest on this show, which is that, historically speaking, physics has been a very male-dominated research field. It’s been very dominated by white cisgender men, in general, for a very long time. It feels like things are improving, it feels like things are getting better, albeit not quite quickly enough. So I wanted to ask two questions. The first is, in your experience, have you seen things change at all over the course of your career? And also, have you noticed different attitudes towards discrimination and equality in the different countries in which you’ve worked?

🟣 Stefanie Czischek (33:25): Yes. So I mean, as a woman in physics, this has always been an important topic to me. Yeah, I’ve definitely experienced that the field is still very male dominated. I’ve also experienced that EDI - so equity, diversity and inclusion - is a much more important topic, or just a more present topic, here in Canada, or North America in general, than it is in Germany. So I definitely noticed that I was made more aware of this topic over the last few years. But thinking about my career, I also find that in both countries I rarely experienced moments where I felt treated unequally compared to men. So these moments still exist, but from my experience, they have become pretty rare. And so I think the mindset has already changed a lot for most people. Based on my experiences, I would say it’s still a problem, and we still need to work on finding the right mindset to treat women and men equally in physics.

(34:34): But I think this is not the main problem anymore. The main problem, which I also now experience as a professor, is that the pool of applicants for a position is just not diverse enough. So if I want to build a diverse research group, then I also need a diverse pool of applicants to choose from. And if this diversity is not given, then yeah, it will always be a male-dominated field if there are not enough women applying for positions. And I think this is the point where we need to get up and do some outreach: get outside, share our stories, inspire young students and show them that they can definitely have a really nice career in academia and in physics, and just encourage them to follow their dreams, live their lives, and consider a career in science.

🟒 Steven Thomson (35:26): It’s interesting that you say that these attitudes are not so prevalent in Germany. In fact, several of our guests who have either come to Germany from other countries or have been visiting Germany when I’ve spoken with them, have also said similar things - that the idea of equality and diversity is not quite so high in people’s minds here in Germany, whereas in North America in particular, I’ve heard that these conversations happen a lot more often and a lot more loudly, and it’s something people are more aware of. So that’s very interesting to hear that several people are having very similar experiences in Germany versus North America. One final question to end on then, which is, if you could go back in time and give yourself just one piece of advice, what would it be?

🟣 Stefanie Czischek (36:18): Yeah, I think, so…I mentioned it earlier, I’ve never been very convinced of myself and of what I’m doing. And if I could go back and talk to little Stef just for a minute, I would definitely tell her to be more confident, to be convinced of what she’s doing. And yeah, I mean, now I feel like I’ve ended up in a successful position. I just started my professor position, which I personally would define as a success for myself, but this was also because quite a lot of people believed in me and pushed me in the right direction. I probably would not have ended up here if I had done everything by myself, and I think I still missed out on a lot just by not believing in myself and not being convinced that what I’m doing is good. And so this is definitely the advice that I would give to myself if I could go back in time.

🟒 Steven Thomson (37:14): Yeah, definitely. I think, again, a lot of people will relate to that. Believe in yourself, and believe in the people around you who tell you that you can do things maybe you don’t even think you can. Okay. If our audience would like to learn a little bit more about you, is there anywhere they can find you on the internet or on social media or anything like this?

🟣 Stefanie Czischek (37:34): Yeah, sure. So I’m on Twitter and my name is @sCzischek. I also have my new group website up, which is czischek-group.uottawa.ca. You can also send me an email. I think my email is mentioned on the website. And yeah, I guess you should also write everything in the description so people see how to write my name. [Editor’s note: Since the time of recording, Dr Czischek has also created an account on Mastodon: @sczischek@qubit-social.xyz.]

🟒 Steven Thomson (37:55): Okay, perfect. We will make sure to leave links to your social media profiles on our own website and anywhere that we post the transcript for this episode. Thank you so much, Dr. Stefanie Czischek, for your time today.

🟣 Stefanie Czischek (38:07): Thanks a lot for having me.

🟒 Steven Thomson (38:08): Thanks also to the Unitary Fund for supporting this podcast. If you’ve enjoyed today’s episode, please consider liking, sharing and subscribing wherever you’d like to listen to your podcasts. It really helps us to get our guest stories out to as wide an audience as possible. I hope you’ll join us again for our next episode. And until then, this has been insideQuantum. I’ve been Dr. Steven Thomson, and thank you very much for listening. Goodbye!