Podcasts

Podcast with Sergio Gago, CEO of Qapitan

21 December 2021

My guest today is Sergio Gago, CEO of Qapitan Quantum. Sergio and I talk about quantum APIs, how to find the best cloud provider and quantum hardware for a particular quantum algorithm, the price of quantum computing and much more.

Listen to additional podcasts here

THE FULL TRANSCRIPT IS BELOW

Yuval Boger (CMO, Classiq): Hello Sergio, and thanks for joining me today.

Sergio Gago (CEO, Qapitan): Hi, Yuval. Good morning. Thank you for inviting me.

Yuval: So who are you and what do you do?

Sergio: Well, my name is Sergio. I've been a CTO in classical companies for a couple of decades, both in my own companies and others, in startups, scale-ups and corporates. And I joined the quantum computing world by creating a company called Qapitan Quantum.

Yuval: And what does Qapitan Quantum do?

Sergio: Qapitan Quantum tries to leverage the same things we've seen in the world of AI over the last couple of decades, by standing on the shoulders of giants, as they say. What we try to do is follow the same paradigm that we've been seeing since the beginning of times in that world. At the beginning you needed a lot of system administrators, data scientists, data engineers, MLOps, all these types of roles and profiles, trying to put their models and systems in your colos, trying to get data sets and build models and inferences. And it was extremely costly. In many cases you would not even get proper inference, either because you didn't have enough compute power or because you didn't have enough data. That stage probably sounds familiar.

That is where we are now in quantum computing. We have algorithms, we have lots of research done, but we need more people. We need more qubits, or less noisy qubits, and we need more algorithms. People say that we are at the ENIAC level, in the late sixties, if we compare quantum with classical. I like to think that we're more like in the nineties of the world of AI, hopefully without an AI winter in the middle. Why? Because in the world of AI, suddenly cloud computing came, and it was much easier to build models, create open source libraries like the ones we have in Python, and create a community that accelerated the development of that industry very, very fast. All cloud providers created their own different layers, like AWS, GCP, Azure.

And today you have plenty of SaaS solutions. You don't need to build your own system. If you want a spam checker or a natural language processing system, you don't need your own model or your own data set. You just call an API and get the results from that. Of course, if you are in a very specific business, you can go ahead and build it yourself. But for 99% of the clients, requirements and problems out there, the out-of-the-box solution will be enough. In quantum computing we are going in the same direction. If you're a big bank or a big pharma company, you should probably already be hiring quantum engineers, building your internal knowledge and capacity, and getting support from some of the companies in our industry.

But if you're a smaller hedge fund, you should still be able to get the benefits of, say, portfolio optimization or credit risk analysis without having the resources to scout among the 300 or 400 startups in the quantum world today. That is exactly what we're doing at Qapitan Quantum: following that AI model and building what we consider the first marketplace of quantum APIs, in a way that any customer can run a very simple API request for any domain-specific problem. Whether it is protein folding, credit risk, portfolio optimization, any of the typical use cases that we see every day in the quantum computing world. A client will be able to get the state of the art, the best potential solution that we have today, for the problem size that they bring. And if they want to solve a portfolio optimization for 20,000 assets, a quantum computer probably will not cut it yet. It will be a few years until those quantum solutions come, and really the question is whether it will be 2, 3, 5 or 10 years. Then those solutions will be available out of the box for the client as well in this marketplace.

Yuval: So when you say marketplace, does that mean that you are not the only ones developing quantum APIs, that you expect to bring in algorithms or APIs from other companies? Or at the moment is it just Qapitan that's doing the development?

Sergio: I think that there are a few people trying similar things, and that's really good news for the industry. We have, of course, people like Zapata, QC Ware, StrangeWorks, you guys at Classiq as well. We all live at this end of the value chain for the customers, trying to provide better solutions to the quantum developers, to the algorithm developers, to provide value to the final client. On the other side of the spectrum we have the hardware providers, and right in the middle are the algorithm developers. I think it really depends on where you want to sit in that value chain. And we want to be not just qubit agnostic, but solution agnostic. So at the end of the day, say you're a quantum developer, or one of these 350 quantum consulting companies around the world today: instead of spending time and money on commodities, the question is, how do I run my algorithms?

Your algorithms have to go somewhere beyond your Jupyter notebook, your demo, or your paper. You have to productize them. When you face that problem, you have two options. Either you hire a system administrator or several classical backend developers, someone to manage your infrastructure, and then you start doing integrations. That's perfectly fine, perfectly doable, but it's reinventing the wheel over and over again. Or, what we are doing is saying: just put your algorithm in this box, and it will run automatically, with governance, compliance, control, billing, security, all those issues taken care of.

They are not your core business, because what you do well is develop algorithms. That is the angle that we're taking; other companies take different angles, and I think we're all approaching a sweet spot in providing additional value to the industry. But we try to go as far as possible in the value chain, because there are going to be 10, 15, 20 different ways of solving a specific problem. And which one does the client want? The cheapest, the fastest, the most accurate, or a combination of the three? We are able to say: well, for your problem, your best solver is going to be using an annealer, or an ion computer by developer A or developer B, or superconducting qubits, or maybe something that pops up next year that no one knows about yet. And that's the value that we're trying to provide.

Yuval: Could you give me an example of APIs that you offer today? I can think of random number generation; that's an easy API, something that you can get almost instantaneously. But what other APIs do you offer today?

Sergio: Yeah, so quantum random number generation is like our “hello world” program. It is what we use to teach algorithm developers how to use the platform and how to upload a new solver, as we call them, into the platform. A solver is linked to a domain problem. It could be finance, it could be pharma, it could be others. We cover the three typical ones in finance: currency arbitrage, credit risk, and portfolio optimization. Then we've provided variational algorithms for chemistry problems, and we offer QML algorithms for classification. But really what matters is how you can put your algorithms into the system. In order to do that, we have three frameworks that allow the developer to do this in a much quicker way. For example, the one that's most advanced and provides the best solutions today is a QUBO framework.

A lot of combinatorial problems can be modeled in the shape of a QUBO. So as long as you can map your problem and your output into that quadratic binary form, everything else is taken care of for you. No matter how many providers come afterwards, it goes directly into the platform, so you don't have to choose vendor A or vendor B; it's already embedded. You can build it from scratch as well, if you want, plugging in your own account with whichever hardware company you want, but you can also leverage these frameworks and SDKs, which reduce your development time to the bare minimum, to the thing that matters. So to answer your question, we're not really developing new algorithms, and that drives my team nuts, because they are quantum engineers, but we're not creating new science here.
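As an illustration of the kind of mapping such a QUBO framework automates, here is a minimal sketch of posing a toy asset-selection problem in quadratic binary form and checking it by brute force. All numbers and names are invented for illustration; this is not Qapitan's actual SDK.

```python
from itertools import product

# Toy asset-selection problem: pick a subset of 3 assets to maximize
# return while penalizing pairwise risk. Numbers are made up.
returns = [0.10, 0.07, 0.05]
risk = [[0.00, 0.02, 0.03],
        [0.02, 0.00, 0.03],
        [0.03, 0.03, 0.00]]

# QUBO form: minimize x^T Q x over binary x.
# Diagonal entries encode -return; off-diagonal entries encode pairwise risk.
n = len(returns)
Q = [[risk[i][j] if i != j else -returns[i] for j in range(n)]
     for i in range(n)]

def qubo_energy(x, Q):
    return sum(Q[i][j] * x[i] * x[j]
               for i in range(len(x)) for j in range(len(x)))

# Brute-force the 2^n assignments, the classical check that a real
# quantum or annealing solver would replace for larger n.
best = min(product([0, 1], repeat=n), key=lambda x: qubo_energy(x, Q))
print(best)  # which assets to hold
```

Once a problem is in this quadratic binary form, any QUBO-capable backend, whether an annealer, a gate-based device running QAOA, or a classical solver, can attempt it, which is what makes the form a convenient common currency for a marketplace.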

I don't think I'm smart enough to do that myself. What we are doing is trying to leverage those who create this new science, these new algorithms, and create a benchmark platform for them to use. So the API platform is the long-term project for us, the long-term game, but it will be some time until we reach that state. At the end of the day, who wants to use an API to solve a portfolio optimization of 50, 60, 80 assets? You want to benchmark your solutions against other solutions, classical, quantum or otherwise, and you also want to distribute them. What if, instead of spending valuable time building this commodity platform or architecture, you could just plug my platform into your cloud provider, colo, anywhere you want, and just work on your algorithms? The only thing you need to give your client is this endpoint, this API that allows them to run any custom model that you work with.

Yuval: Certain quantum algorithms can run on different quantum computers. Not every algorithm can run on every computer, depending on the number of qubits and the connectivity and so on. Different cloud providers and different quantum hardware providers have different pricing. How do you choose the best provider and best hardware for a particular client?

Sergio: It took us quite some time to figure out the model to prioritize and rank solvers. Imagine that for some specific problems we have 20 or 25 different solvers, different algorithms that solve the same problem, and some of them are classical. And as you say, if you try to solve it this way, with this number of qubits or with this topology of qubits, you're going to be able to solve a problem this big or this small. If the problem is small, the accuracy might be much better than if the problem is bigger. So we've created a benchmark model based on three variables, cost, time and accuracy, and then a weighted combination of them all. What we see is that some people tell us: well, I'm not really looking for the most accurate solution, as long as it's decent.

Others say: I want something that's incredibly accurate; I don't mind how much it costs, as long as you give me the best solution of them all. This allows us to do things like: let me run your problem against the 20 or 25 solvers. Some of them will take longer than others, because there are queues and things like that, so sometimes it takes the final client three hours to get the answer back from all the solvers. But they will get a histogram of all the different options, all the different solvers, ranked against each other. Maybe someone says: I want just the cheapest option. Maybe my option requires spinning up hundreds of servers to train a neural network and then using the model to create inferences the classical AI way, and that's going to cost some money as well.
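The weighted cost/time/accuracy ranking described above could be sketched like this. The solvers, metrics, and weights are all made up for illustration, not Qapitan's actual benchmark model.

```python
# Hypothetical per-solver metrics for one problem instance.
solvers = {
    "annealer":  {"cost": 0.40, "time": 12.0,  "accuracy": 0.91},
    "ion_trap":  {"cost": 2.50, "time": 300.0, "accuracy": 0.98},
    "classical": {"cost": 0.05, "time": 3.0,   "accuracy": 0.88},
}

def normalize(values):
    # Scale each metric to [0, 1] across solvers so units don't matter.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

names = list(solvers)
cost = normalize([solvers[n]["cost"] for n in names])
time_ = normalize([solvers[n]["time"] for n in names])
inacc = normalize([1 - solvers[n]["accuracy"] for n in names])  # lower is better

def rank(w_cost, w_time, w_acc):
    # Lower weighted score = better; the weights encode the client's priority.
    scores = {n: w_cost * cost[i] + w_time * time_[i] + w_acc * inacc[i]
              for i, n in enumerate(names)}
    return sorted(scores, key=scores.get)

print(rank(0.2, 0.2, 0.6))  # accuracy-focused client
print(rank(0.8, 0.1, 0.1))  # cost-focused client
```

With accuracy-heavy weights the ion-trap solver ranks first; with cost-heavy weights the classical baseline does, matching the "cheapest, fastest, or most accurate" trade-off in the answer above.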

Or you can run a less accurate version on a specific computer that runs cheaper, at least in algorithmic time. So we allow the customer to decide what their priority is, or to use it as an ensemble model to get the best-of-breed solution as we move forward. Now, what's interesting is how our industry is evolving. For example, today we use variational algorithms because that's one of the best solutions we have for the NISQ era, when we have noisy computers without a lot of qubits. But is that model going to work the same, say, five years from now, or even three years from now?

I think the type of algorithms that we use will change a lot when we don't have the same limitations that we have today. So maybe variational algorithms become a thing of the past, and maybe someone will hate me for saying this, but they are building toward the machine that we're trying to build today. That's needed, and we need to go step by step. I just don't think we will be using variational algorithms 10 years from now. So is all this work that we have invested in the platform wasted? No, because you have an abstraction layer that works, that hides all that complexity from you.

Yuval: So just to clarify what I think I heard. If I have a problem, I could submit it to you. You could initially run it, in the example that you gave, on 20 or 25 different solvers. You would give me a report showing the cost, performance and response time of each. And then based on that, I can choose the best algorithm for me, and then use that in production. Is that about right?

Sergio: That's one way of working, but the most common way is you say: I want to solve this problem, give me the best solution you have. And that will give you one, and just one, solution. Then maybe in five months' time there's a new algorithm using a new computer by some hardware provider, or an algorithm that has been updated by the developer, and it will start ranking higher. Imagine it like Google search results: it's a dynamic thing, it's alive, and the results keep changing for a specific keyword. So as the consumer, you just integrate with this API, which has a specific contract, and you can integrate it with your own pipelines. You're going to get the best-of-breed solution from all the platforms, or you can do exactly what you say: give me everything, and I will decide which result I want to keep.
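The consumer-side contract might look something like the following sketch. The request shape, field names, and mock response are hypothetical, not Qapitan's published API.

```python
# Hypothetical request a client might POST to a quantum API marketplace.
request = {
    "problem": "portfolio_optimization",
    "mode": "best",          # or "all" to get every solver's result back
    "priority": "accuracy",  # "cost" | "time" | "accuracy"
    "data": {"assets": ["AAA", "BBB", "CCC"], "budget": 2},
}

# Mock of what an "all" response might contain.
response = {
    "results": [
        {"solver": "annealer_vendor_a",        "cost_usd": 0.40, "seconds": 12, "accuracy": 0.91},
        {"solver": "superconducting_vendor_b", "cost_usd": 1.10, "seconds": 95, "accuracy": 0.97},
        {"solver": "classical_baseline",       "cost_usd": 0.05, "seconds": 3,  "accuracy": 0.88},
    ]
}

def pick(results, priority):
    """Client-side selection when the marketplace returns every solver's result."""
    key = {
        "cost": lambda r: r["cost_usd"],
        "time": lambda r: r["seconds"],
        "accuracy": lambda r: -r["accuracy"],
    }[priority]
    return min(results, key=key)

print(pick(response["results"], request["priority"])["solver"])
```

In "best" mode that selection would happen server-side, so the client's contract stays stable even as new solvers start ranking higher, which is the Google-search-results dynamic Sergio describes.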

So it becomes effectively a benchmarking platform that you can use to say: please compare for me all the different solvers, annealing, superconducting, using company A, company B, company C, on the same specific problem for everyone. Of course, the same algorithm can be implemented in different ways. So you can have one company, focused on the financial industry, that uses a well-known published algorithm and performs better than another company, just because they've done some fine-tuning or tweaking on their solver. That's fine, and that is something that we want to leverage as well. That's from the end consumer's side. Now, if you're an algorithm developer, you look at the platform from the other side of the marketplace. From that side, I can see all the problems that are available on the platform, or even suggest new domain problems. And I can say: here is my solver for this problem.

I'm going to get the details, the data, from the user in this way, in this format. I can use those frameworks that I mentioned before, or build my own thing from scratch, put it in the platform, and then we do some revenue share on each API call that we execute. You can use our agreements with the hardware providers, or you can use your own; it depends on your own requirements and agreements with the hardware provider as well. And then your solver comes into the platform. What can happen? Maybe in one year's time your solver becomes obsolete, because the hardware platform you're running on is no more. We see this pretty much every month or every quarter: hardware companies deprecate their systems, or they change the way their queues process, or they change the way they bill; many things can change. So the only thing you need to do is make a new commit to your GitHub repository, and that automatically updates the system and the platform, validates it and builds it. It becomes essentially a continuous integration platform for your quantum algorithms. So you can use it for testing, benchmarking, delivery and distribution.
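The developer-side workflow, committing a solver and letting the platform validate and register it on each push, could be sketched as a minimal interface. The class names and the validation check here are hypothetical, not Qapitan's actual framework.

```python
# Hypothetical contract a solver must satisfy to be registered.
class Solver:
    problem = None  # the domain problem this solver addresses

    def solve(self, data):
        raise NotImplementedError

class GreedyPortfolioSolver(Solver):
    """Classical placeholder; a quantum implementation would plug in here."""
    problem = "portfolio_optimization"

    def solve(self, data):
        # Pick the highest-return assets until the budget runs out.
        ranked = sorted(data["returns"], key=data["returns"].get, reverse=True)
        return ranked[: data["budget"]]

def validate(solver_cls):
    # The kind of check a CI step might run on each new commit
    # before (re)building and registering the solver.
    return solver_cls.problem is not None and callable(getattr(solver_cls, "solve", None))

if validate(GreedyPortfolioSolver):
    picked = GreedyPortfolioSolver().solve(
        {"returns": {"A": 0.10, "B": 0.03, "C": 0.07}, "budget": 2})
    print(picked)  # → ['A', 'C']
```

Because the platform only depends on this interface, a commit that swaps the body of `solve` for a new backend keeps every consumer of the API unchanged, which is the continuous-integration property described above.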

Yuval: Let's talk a little bit about predictions. When you look at the classical world and the big cloud providers, AWS or Azure or Google Cloud, you can use their services in multiple ways. You can just buy capacity: “All I want is an EC2 server and that's it”. Or you can use an API for NLP or for geo-tagging or whatever it is. Which do you think will become the prevalent option for quantum? Is it selling capacity, or is it selling an API that performs a certain quantum service?

Sergio: I think at the end of the day, everything is selling capacity when you look at it from the cloud provider's perspective. Take AWS, for example; that's actually one of the stories I use for explaining Qapitan. You could use SageMaker, this tool for data scientists that lets them build their notebooks and their models without needing system administrators. Or you can just integrate with these APIs to do NLP or any type of text analysis, text-to-speech, speech-to-text, transcription, all these types of things. But at the end of the day, what AWS wants is you using their servers, with different layers of abstraction and different layers of billing on top. At the end of the day, you're using their servers. You can pay by machine.

You can pay by script execution. You can pay by API call, like in the world of serverless. But in the end, everything is selling capacity. I think the big players, the ones you mentioned and beyond, are in that game of building lock-ins and creating their own moats, so we buy their capacity, at least until we figure out what's going to be the winning architecture that takes them all. Maybe, at the end of the day, there's only one: superconductors win, or photonic computers win. Maybe there's one, maybe there are two, maybe there are several, but I think it's safe to assume there are going to be one, maximum two, dominant architectures in quantum computing. Whether that's going to be in five, 10 or 15 years, I won't risk a bet on that. So what AWS, Google and of course IBM are doing is placing their bets and trying to sell capacity. On the other side of the spectrum, if you're a consultant, a quantum consulting company, you play with the same economics, the same numbers, as classical software consulting companies. A little bit more risky, with a little bit more uncertainty, but at the end of the day the economics are the same. You're selling quantum engineers, trying to productize as best as possible, trying to optimize your processes, prototyping some things and abstracting some things. But you're selling projects: time and materials, if you will.

Right in the middle is selling APIs. What we try to sell is not capacity; we are trying to sell intelligence on demand. The capacity is still there: IBM will sell their quantum minutes, and Google and AWS will build their own computers. The consulting companies will build better algorithms, build better moats, get bigger contracts, but they still rely on the hardware providers. Right in the middle is where companies like Classiq and StrangeWorks try to make a difference in bringing those two together.

Yuval: And if you're thinking about next year, about 2022, what would your predictions be for the quantum computing industry in 2022?

Sergio: That's a tough one, because you can come back to this podcast later, listen, and say: oh, look at that guy, he was completely clueless. But I'll make a bet. Considering how exponential everything we're doing is, how many people are now working in our industry and creating breakthroughs all the time, I think we're going to be very creative in finding, not necessarily quantum advantage or quantum supremacy in the strict sense, but the benefits of incorporating quantum algorithms into systems. From the scientific perspective, from the research perspective, we talk about orders of magnitude better, right? It has to be exponentially better, exponentially faster, exponentially cheaper. But there's a lot of grey in the middle, where you don't have to be strictly better than your classical counterpart, but you're actually better for the environment, or cheaper to execute, or you don't need as much footprint, or you're a little bit more accurate, or you can combine things together.

And there's a lot of research and studies popping up now on that integration. I think that's what we will start seeing next year, from the scientific side. From the industrial side, I believe we're going into consolidation. So my bet is that there are going to be quite a few more movements like the ones we've seen already this year: companies getting together, doing more things together, either temporarily, for public funds if you will, or for longer periods, like mergers, acquisitions and whatnot, and growth on the investment side. So a bright future, I think.

Yuval: Excellent. So as we get closer to the end of our discussion, I have one more question, on pricing. Do you feel that the price of using a quantum computer is already competitive, to the point where people can start thinking about moving into production? Or is it just totally expensive, and just an exploration at this point?

Sergio: I would say the latter. It is purely exploratory; it's building blocks. Executing quantum algorithms is very expensive, not just because of the per-minute cost that sits in front of your algorithms, but because of who's running them and how fast this is changing. Your model today is going to be completely outdated in three months, when someone publishes a new paper destroying the algorithm you used before and proposing something completely different. And that's fine; that's how you run in deep tech and how you advance the industry. But in that sense, it's actually more expensive because of that uncertainty than because of the per-minute price or the platform price. You have to do it because you have to keep moving, keep running one step at a time. So all in all, it is expensive. It's only for bigger companies at the moment, I guess. What we're trying to do is democratize it, or humanize it, so that smaller companies and everyone can get the benefit of quantum when we reach that inflection point.

Yuval: So Sergio, how can people get in touch with you to learn more about your work?

Sergio: You can find me easily on LinkedIn; I'm Sergio Gago. You can find our company at Qapitan.com. You can also follow me on Twitter @PirateCTO; that's what I've been doing for a while. And yeah, if you search for Sergio Gago, I'm easy to find.

Yuval: That's excellent. Thank you so much for joining me today.

Sergio: Thank you for inviting me. A pleasure to be here.


My guest today is Sergio Gago, CEO of Qapitan Quantum. Sergio and I talk about quantum APIs, how to find the best cloud provider and quantum hardware for a particular quantum algorithm, the price of quantum computing and much more.

Listen to additional podcasts here

THE FULL TRANSCRIPT IS BELOW

Yuval Boger (CMO, Classiq): Hello Sergio, and thanks for joining me today.

Sergio Gago, (CEO, Qapitan):Hi, Yuval. Good morning. Thank you for inviting me.

Yuval: So who are you and what do you do?

Sergio: Well, my name is Sergio. I've been a CTO in classical companies for a couple of decades, both my own companies and others, both in startups, scale-ups and corporates. And I joined the quantum computer movement in a world by creating a company called Qapitan Quantum.

Yuval: And what does Qapitan Quantum do?

Sergio: Qapitan Quantum tries to leverage the same thing there we've seen in the world of AI in the last couple of decades or so by standing on top of soldiers, giant soldiers, as they say. And what we tried to do is follow the same paradigm that we've been seeing since the beginning of times in this sort of world. So at the beginning you need a lot of system administrators, data scientists, data engineers, ML ops, all these type of roles and profiles, trying to put their models and systems in your colos and try to get data sets and try to build models and inferences. And it was extremely costly. And in many cases you would not even get proper inference either because you didn't have enough computer power or you didn't have enough data sets. So, that probably sounds familiar to the stage.

We are now at the moment in quantum computing. We have algorithms, we have lots of research done, but we need more people. We need more qubits or less noisier qubits, and we need more algorithms. People say that we are at the ENIAC levels are in the late sixties. If, we can quantum with classical. I like to think that we're more like in the nineties, in the world of AI and hopefully without an AI winter in the middle. Why? Because in the world of AI, suddenly cloud computing came and it was much easier to build models and create open source libraries, like the ones we have in Python and create community that accelerated very, very fast. The development of that industry, all cloud providers created their own different layers like AWS, GCP, Azure.

And today you have plenty of SaaS solutions. You don't need to build your own system. If you want a spam checker out a natural language processing system, you didn't need your own model or your own data set. You just have an API and then get the results from that. And of course, if you are living in a very specific business, you can go for and build it yourself. But for 99% of the clients out there of the requirements and problems out there, the out of the box solution will be enough in quantum computing. We are going in the same direction. If you're a big bank or a big pharma company, probably you should be already hiring quantum engineers, building your, your internal knowledge and building capacity and get supportive some of the companies in our industry around us.

But if you're a smaller hedge fund, you should still be able to get the benefits of say portfolio optimization or credit risk analysis without having the resources of the scouting with some of what 300, 400 startups in the quantum world today, that is exactly what we're doing at Qapitan Quantum: follow that model on AI and build what we consider in the first marketplace of quantum APIs in a way that any customer can run a very simple API request for any domain specific problem. Whether it is protein folding, credit risk, portfolio optimization, any of the typical use cases that we see every day in the quantum computing world. And then a client will be able to get the state-of-the-art the best potential solution that we have today for the problem size that he brings. And if he wants to solve a portfolio optimization for 20,000 assets, probably a quantum computer will not cut it yet. It will need to be a few years until we are at that state. Those quantum solutions come. And really the question is whether it will be a 2, 3, 5, 10 years, then those solutions will be out of the box for the client as well in this marketplace.

Yuval: So when you say marketplace, does that mean that you guys are not the only ones developing quantum APIs for the marketplace that you expect to bring in algorithms or APIs from other companies, or at the moment is it just Qapitan that's doing the development?

Sergio: I think that there's a few people trying similar things and that's really good news for the industry. We have of course, people Zapata, QC Ware, StrangeWorks, you guys at Classiq as well. We all live in these ends of the value chain for the customers, trying to provide better solutions to the quantum developers, to the algorithm developers, to provide value to the final client. On the other side of the spectrum, we have the hardware providers and right in the middle are the algorithm developers, I think it really depends on where you want to sit in that, in that value chain. And we want to be not just qubit agnostic, but solution agnostic. So at the end of the day, if you're a quantum developer, either one of these 350 quantum consulting companies that are today around the world, instead of spending time and money on commodities, which is how do I run my algorithms?

They have to go somewhere beyond your Jupyter notebook or your demo, or your paper. You have to productize them when you face that problem. You have two options: Either you hire a system administrator or several backend classical backend developers, someone to manage your infrastructure. And then you start doing integrations. That's perfectly fine, perfectly doable, but that's reinventing the wheel over and over again. What we are doing is saying: just put your algorithm this box, and it will run out. It will run out automatically with governance, compliance, control, billing, security, all those issues.

They are not your core business because what you do well is develop, say algorithms. And that is the, the angle that we're taking our companies take different angles. And I think we're all approaching a sweet spot on, providing additional value into the, into the industry. But we tried, we tried to go as far as possible in the value chain in a way that there are going to be 10, 15, 20 different ways of solving a specific problem. And which one does the client want? The cheapest, the fastest or the most accurate, or a combination of the three? We were able to say: well, your problem, your best solver is going to be, you send an annealer or using an ion computer by the developer A or developer B, or using a superconducting qubit, or maybe something that pops up next year that no one knows about. And that's the value that we're trying to provide.

Yuval: Could you give me an example of APIs that you offer today? I think I can think about random number generation, that's an easy API. That may be something that you can get almost instantaneously, but what other APIs do you offer today?

Sergio: Yeah, so the quantum number random number generation is like our “hello world” program is what we used to teach the algorithm developers how to use the platform and how to upload a new solver as we call them into the platform, the solver is linked to a domain problem. It could be finance, it could be pharma, it could be others. So we cover the three typical ones in finance currency, arbitrage, credit risk, and portfolio optimization. And then we've provided variational algorithms for chemistry problems. And we offer a QML algorithms for classification, but really what matters is how can you put your algorithms in the system in order to do that? We have three frameworks that allow the developer to do this in a much quicker way. So for example, the one that's most advanced and the one that provides better solutions today is a QUBO framework.

A lot of combinatorial problems can be modeling in the shape of a QUBO. So as long as you can map your problem or your output with them in the quadratic binary form, everything else is taken care of for you. So no matter how many providers come afterwards, it goes directly into the platform. So you don't have to choose the vendor A or vendor B, it's already embedded into it. You can build it from scratch as well. If you want put in your own account with whichever hardware company you want, but you can also leverage these frameworks and these SDKs, that reduce your development time to the bare minimum, but also to the thing that matters. So to answer your question, we're not really developing new algorithms, and that drives my team nuts because they are quantum engineers, but we're not creating new science here.

I don't think I'm smart enough to do that myself. What we are doing is trying to leverage those who create this new science, these new algorithms, and create a benchmark platform for them to use. The API platform is the long-term game for us, but it will be some time until we reach that state. At the end of the day, who wants to use an API to solve a portfolio optimization of 50, 60, 80 assets? You want to benchmark your solutions against other solutions, classical, quantum or otherwise, and you also want to distribute them. What if, instead of spending valuable time building this commodity platform or architecture, you could just plug my platform into your cloud provider, your colo, anywhere you want, and focus on your own work? The only thing that you need to give your client is this endpoint, this API that lets them run any custom model that you work with.

Yuval: Certain quantum algorithms can run on different quantum computers. Not every algorithm can run on every computer, depending on the number of qubits and the connectivity and so on. Different cloud providers and different quantum hardware providers have different pricing. How do you choose the best provider and best hardware for a particular client?

Sergio: It took us quite some time to figure out the model for prioritizing solvers. Imagine that for some specific problems we have 20, 25 different solvers, different algorithms that solve the same problem, and some of them are classical. And as you say, depending on the amount of qubits or the topology of the qubits, you're going to be able to solve a problem this big or this small. If the problem is small, the accuracy might be much better than if the problem is bigger. So we've created a benchmark model based on three variables: cost, time and accuracy, and then a weighted combination of all three. What we see is that some people tell us: well, I'm not really looking for the most accurate solution, as long as it's decent.

Other people want something that's incredibly accurate: I don't mind how much it costs, as long as you give me the best solution of them all. So this allows us to do things like: let me run your problem against the 20 or 25 solvers. Some of them will take longer than others, because there are queues and things like that, so sometimes the final client takes three hours to get the answers back from all the solvers, but they will get a histogram of all the different options, all the different solvers, ranked against each other. Maybe someone says: I just want the cheapest option. Maybe my option requires spinning up hundreds of servers to train a neural network and then using the model to create inferences, the classical AI way, and that's going to cost some money as well.
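The weighted cost/time/accuracy ranking Sergio describes could be sketched like this; solver names, metrics and the normalisation scheme are all illustrative assumptions, not Qapitan's actual scoring model:

```python
# Hypothetical solver results over the three metrics mentioned above.
solvers = [
    {"name": "annealer-A",  "cost": 0.10, "time": 12.0,   "accuracy": 0.91},
    {"name": "ion-trap-B",  "cost": 1.50, "time": 3600.0, "accuracy": 0.97},
    {"name": "classical-C", "cost": 0.01, "time": 2.0,    "accuracy": 0.88},
]

def rank(solvers, weights):
    """Rank solvers by a weighted combination of min-max-normalised metrics."""
    def norm(key, lower_is_better):
        vals = [s[key] for s in solvers]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        return {s["name"]: ((hi - s[key]) if lower_is_better else (s[key] - lo)) / span
                for s in solvers}
    scores = {
        "cost": norm("cost", True),           # cheaper is better
        "time": norm("time", True),           # faster is better
        "accuracy": norm("accuracy", False),  # more accurate is better
    }
    def total(s):
        return sum(weights[k] * scores[k][s["name"]] for k in weights)
    return sorted(solvers, key=total, reverse=True)

# A client who only cares about accuracy:
print(rank(solvers, {"cost": 0, "time": 0, "accuracy": 1})[0]["name"])
```

Changing the weights to favour cost or time reorders the ranking, which is the "cheapest, fastest or most accurate, or a combination of the three" choice the client makes.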

Or you can run a less accurate version on a specific computer that runs more cheaply, at least in algorithmic time. So we allow the customer to decide what their priority is, or to use it as an ensemble model to get the best-of-breed solution as we move forward. Now, what's interesting is how our industry is evolving. For example, today we use variational algorithms because they are one of the best solutions we have for the NISQ era, when we have noisy computers without a lot of qubits. But is that model going to work the same way, say, five years from now, or even three years from now?

I think we will change a lot of the types of algorithms that we use once we no longer have the limitations that we have today. So maybe variational algorithms become a thing of the past, and maybe someone will hate me for saying this, but they are how we're building the machines we're trying to build today. So they're needed, and we need to go step by step. But I don't think we will be using variational algorithms 10 years from now. So is all this work that we have invested in the platform wasted? No, because you have an abstraction layer that hides all that complexity from you.

Yuval: So just to clarify what I think I heard: if I have a problem, I could submit it to you. You could initially run it, in the example that you gave, against 20 or 25 different solvers. You would give me a report showing the cost, performance and response time of each. And then based on that, I can choose the best one for me and use it in production. Is that about right?

Sergio: That's one way of working, but the most common way is to say: I want to solve this problem, give me the best solution you have. And that will give you one, and just one, solution. Then maybe in five months' time there's a new algorithm using a new computer from some hardware provider, or an algorithm that has been updated by its developer, and it will start ranking higher. Imagine something like Google search results: it's dynamic, it's alive, and the results keep changing for a specific keyword. So as the consumer, you just integrate with this API, which has a specific contract, and you can plug it into your own pipelines. You're going to get the best-of-breed solution from all the platforms, or you can do exactly what you said: give me everything, and I'll decide which result I want to keep.
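Consuming such a contract might look like the sketch below. The endpoint path, payload shape and response fields are hypothetical stand-ins for illustration, not Qapitan's real API, and the transport is stubbed so the example is self-contained:

```python
import json

# Hypothetical API contract: POST a problem description, get back the
# best-ranked result. Only the JSON contract matters to the consumer;
# solvers behind it can change without breaking the integration.
def submit_problem(post, payload):
    """`post` is any callable(url, body) -> JSON string, e.g. an HTTP client."""
    raw = post("/v1/solve", json.dumps(payload))
    result = json.loads(raw)
    return result["best"]["solver"], result["best"]["solution"]

# Stubbed transport standing in for the real service:
def fake_post(url, body):
    assert url == "/v1/solve" and json.loads(body)["problem"] == "portfolio"
    return json.dumps({"best": {"solver": "annealer-A", "solution": [0, 1, 1]}})

solver, solution = submit_problem(fake_post, {"problem": "portfolio", "assets": 3})
print(solver, solution)
```

Because the client only depends on the contract, a new solver ranking higher next month changes the `solver` field in the response, not the integration code.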

So it becomes effectively a benchmarking platform that you can use to say: please compare for me all the different solvers, annealing or superconducting, using company A, company B, company C, on the same problem for everyone. Of course, the same algorithm can be implemented in different ways. So you can have one company, focused on the financial industry, that uses a well-known published algorithm and performs better than another company, just because they've done some fine-tuning or tweaking on their solver. That's fine, and that is something that we want to leverage as well. That's the end consumer side. Now, if you're an algorithm developer, you look at the platform from the other side of the marketplace: I can see all the problems that are available on the platform, or even suggest new domain problems. And I can say: here is my solver for this problem.

I'm going to get the data from the user in this way, in this format. I can use the frameworks that I mentioned before, or build my own thing from scratch, put it on the platform, and then we do a revenue share on each API call that we execute. You can use our agreements with the hardware providers, or you can use your own; it depends on your requirements and your relationship with the hardware provider. And then your solver comes onto the platform. What can happen? Maybe in one year's time your solver becomes obsolete, because the hardware platform you're running on is no more. We see this pretty much every month or every quarter: hardware companies deprecate their systems, or they change the way their queues process jobs, or they change the way they bill. Many things can change. So the only thing you need to do is make a new commit to your GitHub repository, and that automatically updates the platform, validates your solver and builds it. It becomes essentially a continuous integration platform for your quantum algorithms. So you can use it for testing, benchmarking, delivery and distribution.

Yuval: Let's talk a little bit about predictions. When you look at the classical world and the big cloud providers, AWS or Azure or Google Cloud, you can use their services in multiple ways. One is you can just buy capacity: “All I want is an EC2 server and that's it.” Or you can use an API for NLP or for geo-tagging or whatever it is. Which do you think will become the prevalent option for quantum? Is it selling capacity, or is it selling an API that performs a certain quantum service?

Sergio: I think at the end of the day everything is selling capacity, when you look at it from the cloud provider's perspective. Take AWS, for example; that's actually one of the stories I use for explaining Qapitan. You could use SageMaker, this tool for data scientists that lets them build their notebooks and their models without needing system administrators. Or you can just integrate with their APIs to do NLP or any type of text analysis, text-to-speech, speech-to-text, transcription, all these types of things. But at the end of the day, what AWS wants is you using their servers, with different layers of abstraction and different layers of billing on top. At the end of the day, you're using their servers. You can pay by machine.

You can pay per script execution. You can pay per API call, like in the world of serverless. But in the end, everything is selling capacity. I think the big players, the ones you mentioned and beyond, are in that game of building lock-ins and creating their own moats. So we buy their capacity, at least until we figure out what's going to be the winning architecture. Maybe at the end of the day there's only one: superconductors win, or photonic computers win. Maybe there's one, maybe there's two, maybe there are several, but I think it's safe to assume that there are going to be one, maximum two, dominant architectures in quantum computing. Whether that's going to be in five, 10 or 15 years, I won't risk guessing. So what AWS, Google and of course IBM are doing is placing their bets and trying to sell capacity. On the other side of the spectrum, if you're a quantum consulting company, you play with the same economics, the same numbers, as classical software consulting companies. A little bit more risky, with a little bit more uncertainty, but at the end of the day the economics are the same. You're selling quantum engineers. You try to productize as much as possible, optimize your processes, prototype some things and then abstract some things. But you're selling projects: time and materials, if you will.

Right in the middle is selling APIs. What we try to sell is not capacity; we're trying to sell intelligence on demand. The capacity is still there: IBM will sell their quantum minutes, and Google and AWS will build their own computers. The consulting companies will build better algorithms, build better moats, get bigger contracts, but they still rely on the hardware providers. I think right in the middle is where companies like Classiq, like StrangeWorks, try to make a difference in bringing those two together.

Yuval: And if you're thinking about next year, about 2022, what would your predictions be for the quantum computing industry in 2022?

Sergio: That's a tough one, because then you can come back to the podcast, listen and say: oh, look at that guy, he was completely clueless. But I'll make a bet. Considering how exponential everything we're doing is, how many people are now working in our industry and creating breakthroughs all the time, I think we're going to be very creative in finding, not necessarily quantum advantage or quantum supremacy in the strict sense, but the benefits of incorporating quantum algorithms into systems. From the scientific perspective, from the research perspective, we talk about orders of magnitude better, right? It has to be exponentially better, exponentially faster, exponentially cheaper. But there's a lot of grey in the middle, where you don't have to be better than your classical counterpart: maybe you're better for the environment, or cheaper to execute, or you don't need as much footprint, or you're a little bit more accurate, or you can combine things together.

And there's a lot of research and studies popping up now on that integration. I think that's what we will start seeing next year, from the scientific side. Then from the industry side, I believe we're going into consolidation. My bet is that there are going to be quite a few more moves like the ones we've seen already this year: companies getting together, doing more things together, either temporarily, for public funding if you will, or for longer periods, like mergers, acquisitions and whatnot, and growth on the investment side. So a bright future, I think.

Yuval: Excellent. So as we get closer to the end of our discussion, I have one more question, on pricing. Do you feel that the price of using a quantum computer is already competitive, to the point where people can start thinking about moving into production? Or is it just totally expensive, and purely exploration at this point?

Sergio: I would say the latter. It is purely exploratory, or it's building blocks. Executing quantum algorithms is very expensive, not just because of the per-minute cost you pay to run your algorithms, but because of who's running them and how fast everything is changing. Your model today is going to be completely outdated in three months, when someone publishes a new paper demolishing the algorithm you used before and proposing something completely different. And that's fine; that's how you operate in deep tech and how you advance the industry. But in that sense, it's actually the uncertainty that makes it expensive, more than the minute price or the platform price. You have to do it because you have to keep moving, one step at a time. So all in all, it is expensive, and I guess only for bigger companies at the moment. What we're trying to do is democratize it, or humanize it, so that smaller companies and everyone can get the benefit of quantum when we reach that inflection point.

Yuval: So Sergio, how can people get in touch with you to learn more about your work?

Sergio: You can find me easily on LinkedIn; I'm Sergio Gago. You can find our company at Qapitan.com. You can also follow me on Twitter @PirateCTO; that's what I've been doing for a while. And yeah, if you search for Sergio Gago, I'm easy to find.

Yuval: That's excellent. Thank you so much for joining me today.

Sergio: Thank you for inviting me. A pleasure to be here.

