Podcasts

Podcast with Santanu Ganguly - Cisco

7 September, 2021

My guest today is Santanu Ganguly, a systems architect from Cisco. Santanu has a new book out about quantum machine learning. We talk about this book, flying qubits, the challenges in moving quantum machine learning to production and much more.

Listen to additional episodes by selecting 'podcasts' on our Insights page

The full transcript is below

Yuval: Hello, Santanu. And thanks for joining me today.

Santanu: Thank you, Yuval. Thanks for having me. And most of all, thanks a lot for pronouncing my name correctly. I'm really grateful.

Yuval: My pleasure. So, who are you and what do you do?

Santanu: My name is Santanu Ganguly. I currently work for Cisco Systems in UK. One of my duties at Cisco Systems today is being part of their steering committee for quantum tech internally, which is one of the internal projects. And I have a background in physics and math. I've always been passionate about anything that's science, especially quantum computing. So, I work closely with UK government projects and some enterprise financial bodies in the UK. That's me.

Yuval: First, congratulations on your new book on quantum machine learning. Could you please tell me a little bit about that?

Santanu: Thank you very much. Yes. It was a great opportunity offered to me to work on that. And initially, I thought it would be easy to do. It was not nearly as easy as I thought it would be. The subject matter is quantum machine learning. And the reason I chose that subject matter is because I'm very passionate about machine learning, in the classical sense, and what it does. And I'm even more passionate about quantum computing. And I've always been passionate about quantum computing. I have a background in math and physics. And over the last five, six, seven-odd years, as quantum computing became more and more relevant to industry, and organizations such as Google, IBM, et cetera, started to pump money into the technology and it brought out relevant developments, I got more and more excited.

So, the book is about addressing the basics of quantum machine learning, algorithms involved, and most importantly, addressing a gap that I thought existed in most quantum machine learning literature, so to speak.

So, for example, there are monumental volumes by Peter Wittek. I think he wrote the first book on quantum machine learning ever back in 2011. That was followed by Maria Schuld and Petruccione's monumental volume on Supervised Learning in Quantum Machine Learning. And that was a great book as well. And this has been followed by an explosion of different algorithms, different research in literature and research media.

However, one of the things I always missed and struggled with myself initially when I started to look into this field, is how do all these algorithms, all these theories, which are very new, very growing, very recent, translate into code? So as I started to look into that, what became apparent is not every algorithm works efficiently on every quantum computing platform.

As we speak, there are several variants of quantum computing platforms today. D-Wave came out with the first commercially available quantum computer, and they use annealing. IBM came out with superconducting qubit, and they use a gate model. Rigetti came out with similar superconducting gate models. Xanadu and PsiQuantum are working on photonic quantum computers.

Going back a few years, the more I looked into these algorithms, the more it appeared to me that not every algorithm works efficiently on every platform. For example, right now today, there are studies that have been done which claim that certain NP-hard problems, such as Maxcut, might work more efficiently on D-Wave’s computers using QUBO, than on gate-model computers using the quantum approximate optimization algorithm, QAOA. So all these variations and challenges when it comes to applying algorithms to platforms is what actually triggered my interest in writing the book.
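As a concrete illustration of the QUBO formulation mentioned above, here is a minimal, purely classical Python sketch. It builds the Max-Cut QUBO matrix for a small graph and checks it by brute force; no quantum hardware or vendor SDK is involved, and the graph is just an illustrative example:

```python
from itertools import product

def maxcut_qubo(edges, n):
    """Build a QUBO matrix Q so that minimising x^T Q x solves Max-Cut.

    Each edge (i, j) contributes x_i + x_j - 2*x_i*x_j to the cut size,
    so we minimise the negative of the total: Q_ii -= 1 per incident edge,
    Q_ij += 2 per edge (upper triangle).
    """
    Q = [[0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] -= 1          # linear terms live on the diagonal
        Q[j][j] -= 1
        Q[i][j] += 2          # quadratic coupling
    return Q

def brute_force(Q, n):
    """Exhaustively minimise x^T Q x over x in {0,1}^n (classical sanity check)."""
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# 4-cycle graph: the optimal cut alternates vertices and cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
x, energy = brute_force(maxcut_qubo(edges, 4), 4)
print(x, -energy)
```

On an annealer, the same `Q` matrix would be handed to the hardware instead of the brute-force loop; on a gate-model machine the identical cost function would instead be encoded into a QAOA circuit, which is exactly the platform split discussed above.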

Hence the book covers most major quantum computing libraries in existence today. It has Google Cirq, it addresses Rigetti PyQuil and QVM. It addresses IBM's Qiskit. It addresses D-Wave's platforms as well. And in order to actually efficiently gauge whether I'm doing things correctly or not, I actually collaborated with D-Wave. They looked over the material that I have written in the book about them, and I got their okay to go ahead before publication. And I'm very grateful to them for that. So that's what triggered my interest: I wanted to do something that gives the readers an option, an entry point to all the different major platforms, and hopefully an idea of which algorithm works best on what kind of platform and how to get around them. So that's what actually caused it. Thank you.

Yuval: You mentioned that some algorithms work better on some machines. But there's even a bigger question. Is quantum machine learning useful today? Or when do you expect it to be useful in a production environment?

Santanu: That's a very good question. My response to that would be quantum machine learning in certain cases is definitely useful today. So one of them is drug discovery and molecule modeling. This is something that is being researched on. It is a current topic. But this is something that has also worked. I'll give you an example of that. Penn State University under Dr. Ghosh started to use quantum computing and machine learning to investigate vaccine modeling for the COVID virus. So that's one example of it.

The other example of where quantum machine learning is being researched upon and being used is in the financial sector. So for example, in financial risk analysis, in financial portfolio optimization pricing. So there are several published works from Goldman Sachs, from Chicago Quantum, who are actually looking into a quantum version of finance.

The reason the quantum angle is being investigated is because of the plethora of choices that you get. You get all sorts of probabilistic choices in a quantum computing and machine learning environment that you normally struggle to get in a binary environment. So it is being used and hopefully, as the usage goes up, we'll learn more and more about how to optimize different applications with different algorithms on different platforms. And we will improve as we did on our own standard classical computers. When I was a kid, a classical computer was a big thing sitting on my desk. Now it's in my pocket, literally. So the more we use, the more we should learn.

Yuval: When you mentioned running on different computers, I'm guessing you mean different types of architecture. PsiQuantum versus Honeywell, and not so much Rigetti versus IBM, both sort of gate-based computers, is that correct?

Santanu: That is correct. Absolutely. There are different architectures. Then there are other aspects of both machine-learning computing and communication. Let's say we have two different quantum computers. These quantum computers could be two IBMs, could be one IBM and one Rigetti, could be one D-Wave and one Rigetti, whatever. How do these two computers communicate with each other? That has not been done. And again, this is a huge area of research where I'm involved and a lot of other research outfits are involved, because this brings into question, how do you get information from a qubit level up to a level where you can communicate with another quantum platform? Especially if that level is optics.

Right now, that's all we know. We think optics should be the communication medium, but you have a superconducting quantum platform at minus 273 degrees centigrade, close to zero Kelvin. And then from there, you need to get the information up to an optical level and then communicate. So there are research bodies, for example one at the University of California, Berkeley, that have been working on this for a long time. It's called flying qubits. So basically you fly the qubits from one energy level to another to transport them over the communication barrier. So yes, the architectural differences play a major factor, as do the basic science and the application we are considering.

Yuval: Now, if I work at a commercial company, want to do chemical research or portfolio optimization, and I want to try machine learning, do I have to decide a priori what hardware architecture I will run it on? How do I know if this is going to be best on D-Wave or IBM or something else?

Santanu: Right now, there are two ways. Way number one is kind of an approximate way. So you kind of guess, okay, I'm going to run a Maxcut algorithm, is this going to work better on a D-Wave or an IBM? And then you try it out. So that's one way. The other way is blind trial and error. As for resolving these differences between the various platforms, and the applicability, or rather efficiency, of different algorithms on different platforms, I think work is underway via various startups.

And if not, if there are other quantum computing companies, where people are thinking about building up stacks on top of the physical quantum computing and communication level, where we have some sort of an orchestration level, which will make the physical layer underneath invisible, so to speak. In other words, the user would not care if the platform they're programming on is an IBM or a D-Wave or whatever else, as long as they know what they want, which algorithm they're going for, my apologies, they program it. And then the underlying software stack takes the best platform for them and allows them to program on it.
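The orchestration layer described above can be sketched as a toy dispatcher. Everything below, the problem-class names, the backend labels, and the routing table, is an invented illustration of the idea, not any real product's API:

```python
# Hypothetical orchestration layer: the user names a problem class,
# and the dispatcher, not the user, picks the physical platform.
ROUTING_TABLE = {
    # problem class           -> (backend,      rationale)
    "qubo_optimization":         ("annealer",   "QUBO maps natively to annealing"),
    "variational_circuit":       ("gate_model", "parameterised circuits need gates"),
    "gaussian_boson_sampling":   ("photonic",   "continuous-variable hardware"),
}

def dispatch(problem_class: str) -> str:
    """Return the backend this toy orchestrator would route the job to."""
    try:
        backend, _rationale = ROUTING_TABLE[problem_class]
    except KeyError:
        raise ValueError(f"no backend registered for {problem_class!r}")
    return backend

print(dispatch("qubo_optimization"))
```

The point of the sketch is the shape of the abstraction: the caller states *what* they want to run, and the physical layer underneath stays invisible, exactly as described.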

So, this is something that people are working on under research, and possibly there are startups, one or two around who may even have the first-generation solutions for these. So this could be another way of actually addressing this disparity in architecture, an underlying layer.

Yuval: If we look at where we are today with quantum machine learning and fast-forward and say, okay, what does it need? What needs to happen for it to be a truly useful tool beyond one or two specific use cases? Then one thing that you mentioned is that abstraction layer or the ability just like Classiq does to go from a functional high-level model into a quantum circuit so you don't have to decide ahead of time which hardware you're using. The second thing is obviously an improvement in the noise characteristics and the number of qubits. So you could run larger models and run longer algorithms. Is there something else, is there a third or fourth thing that you envision that's missing for quantum machine learning to become mainstream?

Santanu: I wouldn't say it's missing, it's probably under research, not probably, it is definitely under research right now, and people are working on it. And that's basically coherence. And you mentioned errors, and you also mentioned disparities and an abstraction layer. So besides this, not everything is going to be optimized, or usefully optimized, by quantum computing or machine learning. For example, if you're using PowerPoint across the network, there's no point in running it on a quantum channel, because PowerPoint would work much better, possibly, on a classical network. So going forward, from a futuristic point of view, there will be specific applications that quantum machine learning will be useful for.

And to go back to your question, to address those, what we'll need is refinement in basic qubit processing. Right now, the way physical qubits are actually created is not perfect. So there needs to be optimization in that process itself, and then obviously we'll need more qubits. Right now, you get about 50, 53 qubits on a real quantum computer. And most of the algorithms are being tested on simulators. And then, if they don't need too many qubits, they'll run on the actual platform. So we actually need that physical layer. We need the abstraction layer.

And then the other option and challenge is obviously scalability. So the problem needs to be meaningful. And in terms of a meaningful problem, we need to scale it. I'll give you an example. So you may have seen on the Internet that there was a traffic optimization study done in Southeast Asia. So the city they picked has humongous traffic congestion. And then they did a machine learning study based on some constraints, such as distance and people's work destination, which way do you get rational traffic, et cetera, et cetera. And then an optimized model, which is mathematically optimized. But if you look at that graph, you see that the machine learning algorithm is sending drivers from one point to another around the city, not through one of the quicker paths possible. Now, they are doing it based on a constraint of time, but how many people do you know who if they can drive from destination A to B in five kilometers is going to take a 20-kilometer roundabout way to save 10 minutes? Not many.

So, we need to find out how practical it is to run these things. Will it actually be a practical solution to the problem in hand? And how to scale these queries in a rush-hour platform. In a financial environment, for example, you may get hundreds of thousands of queries. Now, are all those queries going to be efficiently serviced by quantum machine learning? Or would it be 50:50? Some classical, some quantum? So these are areas where we need more clarity. And more importantly, once you have the clarity, we need more real-time decision-making. So, there should be a layer, be that AI governed, be that manually governed, which would say, okay, this is best served by quantum or, it is best served by classical - go your different ways, and here is some magic. So that's my take on it.
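The decision layer described here, something that says "this query is best served by quantum, that one by classical", can be illustrated with a toy rule-based router. The rules and thresholds below are entirely invented for illustration; a real system might be AI-governed and use far richer signals:

```python
# Toy quantum-vs-classical decision layer, manually governed by simple rules.
def route(problem_size: int, combinatorial: bool) -> str:
    """Decide which solver class services an incoming query (illustrative rules)."""
    if not combinatorial:
        return "classical"        # ordinary workloads stay on classical hardware
    if problem_size <= 20:
        return "classical"        # small instances are cheap to solve exactly
    return "quantum"              # large combinatorial instances go quantum

print(route(100, True))
```

Under rules like these, a rush-hour burst of financial queries would naturally split between the two solver classes, the 50:50 hybrid outcome the passage above anticipates.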

Yuval: As we get closer to the end of our conversation today, we spoke a lot about quantum machine learning, but obviously there are other areas in quantum computing, quantum key distribution, and others. Is there anything in particular that drew you to focus on quantum machine learning?

Santanu: Yes. Fundamentally, quantum computing is of deep interest. Attached to quantum computing is the great question today of security. That is also of deep interest. The other thing of deep interest is quantum error correction. There are aspects of quantum security and quantum error correction, both of which can be addressed by quantum machine learning: Reinforcement learning or supervised learning or unsupervised learning, some such algorithm. And there are research and studies that are ongoing on that. So my interest is tied to specifically these domains and that's where machine learning became interesting because it can actually solve some serious unanswered questions in quantum computing as it exists today.

Yuval: So Santanu, where can people get in touch with you to learn more about your work?

Santanu: Thank you very much. I'm on LinkedIn. If they do a search with my name and Cisco after that, I should pop up. Or quantum after that, I should pop up. Feel free to reach out to me via LinkedIn. And I'm usually quite proactive in answering. I'll be very happy and glad to respond to anybody who has any questions at all. Thank you.

Yuval: That's excellent. I enjoyed very much speaking with you. Thanks so much for joining me today.

Santanu: Thank you very much. Likewise. It has been great talking to you. It's been a pleasure and an honor. Thanks for having me.




Start Creating Quantum Software Without Limits

contact us