Aiming to become the global leader in chip-scale photonic solutions by deploying Optical Interposer technology to enable the seamless integration of electronics and photonics for a broad range of vertical market applications

Phil Alsop of PIC Magazine interviews Suresh Venkatesan

AI-generated transcript:

Phil Alsop: A good place to start would be with a little bit of an intro to POET Technologies: when and why it was formed, any key milestones to date, that kind of information, please.

Suresh Venkatesan: Sure. Today POET is a publicly traded company on the Nasdaq, but it has been publicly traded on the Toronto Stock Exchange for a couple of decades. It reversed into a solar company at one point and has had its life cycle associated with various technologies, but the focus on integrated photonics was something I brought into the company when I joined in 2015. We reinvented the company at that point around creating differentiation through integration; that was really our theme, to differentiate ourselves in the world of photonics by creating integration solutions for optics and electronics. The key milestone for the company came around 2018, when we invented the whole concept of the optical interposer, pivoted the company towards a silicon-based integration approach versus what existed before I joined, and essentially relaunched the company around what we now call the POET Optical Interposer. We kept the name, which originally stood for Planar Optoelectronic Technologies, because what we do is still optoelectronic and integrated in nature. The fundamental vision and mission of the company around integration hasn't gone away; what changed is how we implement these integrated solutions. We've now generated a great deal of IP, over 60 patents, as well as a lot of differentiation in the process technology associated with this integration. So we've come a long way. Although the company itself has existed for a couple of decades, what we're doing now is vastly more exciting, particularly with the advent of AI. As we were developing these concepts around the interposer, they dovetailed perfectly with the needs of the AI market, where there's a huge explosion in the amount of data being generated and consumed, and all of that data needs to be communicated optically. That drove the need for integration, which happened to fit perfectly with the integration vision we had for photonics. So after a few years of talking about interposers and integration, there was suddenly market demand that matched exactly what we were developing. We're now in a phase of very high interest in and demand for what we're doing, and as a company we're refocusing our efforts in this area.

Phil Alsop: You referenced the optical interposer a couple of times there. Perhaps you can give us a flavor of what that is and the benefits it brings. It sounds as if you're continuously developing it, so what can you share about where it started and what it has become today?

Suresh Venkatesan: The world has been spending a lot of resources and time on what is called silicon photonics, which is basically the concept of using silicon wafers and silicon waveguides to provide some form of integration in optics. That's been around for, I'd say, 23 to 25 years now.
It has its advantages and its challenges. Likewise, there are technologies developed decades ago called planar lightwave circuits, or PLCs, which used different waveguide materials but were largely passive in nature and only optically focused. What we did at POET was marry the best of both worlds. We took the advantages of silicon and CMOS processing and combined them with the benefits of planar lightwave circuits to create these interposers, which are essentially an advanced passive substrate for the co-integration of electronics and optics. They have multiple layers of electrical interconnects and multiple layers of optical interconnects built into a substrate for a purpose-built application, and we then hybrid integrate, or hybrid bond, components onto it. It's not very dissimilar from components going onto a printed circuit board or into a multi-chip module; here, the components go onto our optical interposer. The key difference is that the optical interposer accepts both electrical and optical components, whereas multi-chip modules and PCBs largely cater to the world of electronics. Our big breakthrough, or differentiation, was being able to create these substrates, or interposers, that are purpose-built for applications but at the same time have the cost structure and economies of scale of silicon processing. So we take the benefits of silicon-based integration in growth, volume, and cost and marry them with the principles of optical assembly, which is not easy to do, because optical assembly has many more things to consider; that's why it has been somewhat boutique in nature for years. Now we're demystifying what optical assembly needs to look like, and we're able to create a platform for this integration.

Phil Alsop: And alongside that, I think you also do, or have done, development work around photonic integrated circuits, or PICs as they're more commonly known. Can you share the sort of work you've been doing on those specifically?

Suresh Venkatesan: Sure. The optical interposer, going back to my previous answer, is largely a substrate. Making that substrate into an integrated circuit is the act of assembling components onto it for specific applications, and the substrate then supports various market applications. The big ones we're really focused on these days are 800 gigabits per second, 1.6T, and 3.2T, largely because that's where the largest demand is today, driven by the needs of AI. We initially started developing this technology back in 2018, when 800 gig didn't yet exist and we had a lot of fundamental development to do. So we did a lot of that development around 100 gigabits per second and validated that the technology works and meets market requirements. We do have a few customers at 100 gigabits per second who are vetting the technology in a manufacturing environment, and we've been working on that.
But the base technology we've developed is what we would call a platform. That means you can interchange the components on that platform and very quickly move up in speed. So we've been able to leapfrog from 100 gigabits per second to 800 gigabits per second, which we did over the past year and a half, and we now have a lot of customers excited about our 800 gigabits per second solution, because that market is just nascent today, with very explosive growth relative to AI, and then a rapid transition to 1.6T and 3.2T. So we're really focused on that. Our photonic integrated circuits today largely cater to customers in that space: 800 gig, 1.6 terabits per second, and 3.2 terabits per second. We're also one of the few companies in the world that has actually demonstrated what are called 200 gigabit per lane solutions, which are necessary to extend the roadmap to 3.2 terabits. That's something that's really exciting for us. For the first time as a company, I would say we are on the leading edge of what the hyperscalers need, and we're excited to be working with both customers and partners to demonstrate our solutions in that space through these PICs, or photonic integrated circuits.

Phil Alsop: Just following that up, with these kinds of technologies, scaling can often be the problem. You have a road map, as you say; do you anticipate any particular challenges in getting up to, I think you mentioned, the 3.2 speeds? Can you use more or less exactly the same solution and just get the speeds up, or does there come a point where it gets more complicated to continue?

Suresh Venkatesan: Well, there's the component piece, which we rely on our partners to develop, and then there's the interposer, which is a platform. It does scale, and it's somewhat speed agnostic from that perspective. I wouldn't say it's no work, but it is relatively less heavy lifting to move from one speed to another. If you're building a module the conventional way, every change, whether it's 800 gig DR4 or FR4 or LR4 or ER4, or 1.6T, is a ground-up design. You essentially start over, because the number of components you have to integrate, whether there's a multiplexer or demultiplexer, how you align the fibers and lenses, all of that changes. So for most people making these modules the conventional way, each variant is a ground-up design. With us, you buy an 800 gig engine and the module design remains exactly the same; we take care of all the optical complexity on the interposer itself. It's basically the same chip, or different chips: they buy a DR4 chip, an FR4 chip, an LR4 chip, for example, their module stays exactly the same, and all they do is replace the chip on that module to get a completely new product. So the time to market and the engineering resources our customers need to put in to offer these different form factors are dramatically reduced by using our solutions, because we take care of the optical path that they no longer need to worry about, and we do it, of course, using integration and a platform.
So for us it's relatively straightforward to spin many of these different variants, and it's relatively easy for our customers to use them. That's why we have engagements with Luxshare, for example, which is a Taiwanese customer, and with Foxconn, which of course is a household name in the space. They're engaged with us on a multigenerational development, starting at 800 gig DR and FR and migrating to 1.6T DR and FR and beyond. They view it as an advantage to use our solutions because they are inherently multigenerational and multi-form-factor, and they recognize that one investment in one product line of ours expands across a multitude of different product lines and options. So we're really excited about those design wins, and of course there are others that we're keenly working on.

The other thing we've noticed, especially with integration, is that in the 100 gig days four years ago, people didn't care too much about what we were doing, because we were far ahead of the time and what we were offering was perhaps overkill for a 100 gigabit application. But today, as we migrate to 1.6 terabits, some really thorny problems like crosstalk and EMI come into play, and with our integrated solutions some of those issues are mitigated. So more and more people are now looking to interposer solutions as the way to go. In fact, at this year's OFC several other companies started talking about interposers, or fabrics, things we've been calling an interposer since 2018. There's a resurgence in the use of that phrase, largely through a recognition that this kind of integration is going to be important. So, having been the company singularly focused on interposers in this industry, we expect more and more people to want to look at solutions that are interposer-like. But we feel we have an advantage because we started early, and because, architecturally, we've done things that make it more of a platform solution, as opposed to something that is very application-specific and has to be ripped up and redone every time you change. So we feel we're in a really good position now, because the industry has finally caught on to the fact that the approach we've been developing over the past four or five years, 3D integration and 3D assembly techniques in photonics using interposers, is in fact the approach most people are taking. I'm happy the industry is catching on; it provides us with a lot of opportunity, but of course a lot of competition as well.

Phil Alsop: And I know you also talk about the company's mission to "semiconductorize" photonics. Is that what you've been describing, the way you build the platform so it's much easier for people to use, or are there other things you're doing or planning to do to semiconductorize the photonics space?

Suresh Venkatesan: The phrase "semiconductorization of photonics" really comes from looking at how photonics has been built and used over the past 20 or 30 years.
Most of the assembly has been boutique, built without significant scale really in mind. Now, as you look at these AI technologies and requirements, there's a big thrust for more data, which means more optics, and the volumes in the world of semiconductors are very, very high. We feel that for optics to keep up with those kinds of volumes, demand, and growth, optics needs to look like semiconductors. It needs to be built in the same fabs, using the same equipment, and it needs to leverage the billions and billions of dollars of investment that have gone into the semiconductor space in order to catch up in scale, volume, cost, and performance. So the term "semiconductorization of photonics" is really about taking photonics assembly, not necessarily the components, because those components are very material-focused, whether indium phosphide, silicon, or thin-film lithium niobate, but the assembly techniques, and propelling them into the world of semiconductors, so that when we're building these PICs they look like chip-scale semiconductors. When we talk about semiconductorization, we're talking about making photonics assembly look like semiconductor chips: using the chip-scale, wafer-level manufacturing techniques people use in semiconductors, so there's a one-to-one correlation between semiconductor chips and photonics chips. Both are built at wafer scale in large silicon fabs. That whole concept of scale, size, miniaturization, form factor, and performance largely dovetails with what Moore's law has been doing for decades in the world of semiconductors, and we're trying to get onto that trajectory. That's the term semiconductorization that we've been using at POET.

Phil Alsop: And you mentioned a little while ago Foxconn as one of the customers you're working with. I think it's Foxconn Interconnect Technology, to give them their full name, and they have chosen your optical engine for use in their optical transceiver modules. I suspect some of it is confidential, but can you share some of the background to the way you're working with them as a partner?

Suresh Venkatesan: Yes. There are technical requirements and volume requirements around 800 gig moving on to 1.6T and beyond; that's clearly one piece of it. There's also the geopolitical aspect of needing module makers that are not all concentrated inside China. Luxshare and Foxconn are Taiwanese companies, even if they have some operations in China. So our partnerships with some of these companies are largely about providing a more expansive set of solutions to this rapidly growing AI space. Our engagement with Foxconn Interconnect is for them to use our engines to create modules for 800 gig and 1.6 terabit.
We're starting with 800 gig DR8, which is one of the solutions they're excited about taking to market. But as I said, for both us and them it's relatively seamless: once you commit to using a POET optical engine, the transition from DR8 to FR4 to linear pluggable optics is relatively transparent, in that the same, or largely the same, module capabilities and design work across all of these different form factors. So while it starts with a focus on a specific product, an 800 gig DR8 application, we do expect it to proliferate from there into multiple form factors and speeds.

Phil Alsop: In terms of AI, which you've referenced a few times, and interconnect technology, particularly in the data center space, there's clearly a massive need at the moment with all these large language models and the number crunching going on. Do you think there will still be a massive requirement longer term? Once the AI applications are out there, will things calm down a bit, or, because there will be so many of them all demanding the feeds and speeds we've been talking about, will there still be significant demand for the AI infrastructure?

Suresh Venkatesan: That's a crystal ball question, but look at what has happened recently: once the software associated with training and inferencing became available and you had these large language models, the pendulum swung to asking whether the hardware is compatible and capable of doing what we need, and clearly it's not. So there's this huge push to increase the number of GPU cores; if you look at the latest NVIDIA processor, it's two to three times larger than the one before it. So yes, there's going to be a need for more and more compute, and compute at better power efficiency, and I don't think that's fundamentally going to change. But what we see happening is that the more data you pipe through these GPUs or compute or AI servers, the more bottlenecks start to appear: bottlenecks in memory access, bottlenecks in CPU access, and so on. So there's a trend towards reducing those bottlenecks, and most of the approaches aim to eliminate the latency in data transfer between chips, or between chips and memory. So while there's big growth associated with optics in AI, largely for node-to-node communications, there is clearly a trend towards shorter-reach optical communications between chips, between chips and memory, and in disaggregated systems. We see that happening; the question is not if but when. Most of these transitions, when they happen, they happen and the spigot turns on, but until then it's always the next thing that's going to happen. That's typically how transitions in the industry occur. It happened with analog cell phones going to digital.
Back in the nineties it was analog, analog, analog, and then, boom, the spigot turned and it was digital; one day everything was digital. We expect the same thing to happen here. So a lot of what we're doing is preparing for that transition to occur, where that significant increase in volume is going to be. There are a lot of companies in that space now working on chip-to-chip solutions, all waiting for that ultimate transition, when copper absolutely runs out of steam and there's a cost-effective solution for photonics to take over. And we see that as imminent: not next year, but probably not ten years out either. There's going to be a transition, and when it happens, the need for integration in photonics, for large-scale, small-form-factor, low-cost photonic components, is just going to be there. So I don't see things slowing down. I think large language models are going to get larger and larger; they're going to put more and more demands on the processor and on latency, and optics is a path to solving those latency issues. It's being primed now by companies like POET and several others to meet the requirements when that spigot turns on.

Phil Alsop: Just before we finish, a couple more things to ask. I think you're also working alongside MultiLane, if I remember correctly. Can you share the sort of work you're doing with them at the moment?

Suresh Venkatesan: Yes. They're a customer and collaborator of ours. They have a presence in high-speed optical test and measurement and are very well reputed. They have some module manufacturing capabilities and are keen on entering this really high-speed, 800 gig and beyond, module space. As I said, because our integrated engines take care of a lot of the optical path design on the chip itself, their ability to use our engines to create modules is simpler: lower capex, lower complexity. So we're partnering with them to create fully POET-based 800 gig transceiver modules, and then of course moving on to 1.6T. They're potentially going to be a manufacturing source of modules for us, and also a customer, because they're going to be making and selling modules as well. It's a really great relationship; we only started working with them over the past couple of quarters, so we would expect the fruits of our labor to show up sometime next year.

Phil Alsop: And then maybe just finally, you've alluded to quite a lot of what's going on and what may come in the future. But in terms of the road map, it sounds to a certain extent as if you've been waiting for everyone else to catch up with you, and now that's beginning to happen, so I imagine you'll try to keep ahead. Is there anything you can share about technologies you're looking to develop further, or about growing the company in terms of your partner base?
Suresh Venkatesan: The way I look at it is this: we have 100 gigabit per second solutions, we have some customers designing us in, and if and when they ramp, which I think will be soon, we will supply them. But that is not where our focus is right now. Our focus right now is to capitalize on the large opportunities at 800 gig and to ensure that customers know and understand that our platform is scalable to 3.2T. That's really important. As a start-up it's often difficult: you're chasing near-term revenue, but you're a new company with a new technology, and you have to show that the investment a customer makes in you for differentiation is a long-term play, not a one-and-done. So you always have to be at the leading edge, and that becomes a balance, because if we put all our focus on 3.2T, that revenue won't show up until much later, but it's also an important flag to plant. If we plant that flag, people say, "Okay, let's work with them now, because we know there will be a solution with them four or five years from now." So yes, we have this focus on getting the 800 gig products to market, but at the same time we have an aggressive plan to showcase our technology at the 3.2 terabit level over the course of the next year.

Phil Alsop: It's been brilliant to chat, with some really great insights and information about what's going on at POET and thoughts about the photonics market more generally. Suresh, I really appreciate your time. Thank you very much.

Suresh Venkatesan: Thank you.
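For readers keeping track of the speed classes mentioned in the interview (800 gig, 1.6T, 3.2T, and the 200 gigabit per lane solutions needed for 3.2T), the short sketch below works through the per-lane arithmetic. The lane counts are common industry conventions (for example, an 800G DR8 module aggregating eight 100 Gb/s lanes) used purely for illustration; they are assumptions, not POET product specifications or figures from the interview.

# Illustrative per-lane arithmetic behind the 800G / 1.6T / 3.2T module classes.
# Lane counts follow common industry naming conventions (e.g. DR8 = 8 parallel lanes);
# they are assumptions for illustration only, not POET product specifications.

module_classes = {
    # name: (number of lanes, per-lane rate in Gb/s)
    "800G DR8, 100G lanes": (8, 100),
    "1.6T DR8, 200G lanes": (8, 200),
    "3.2T, 200G lanes": (16, 200),
}

for name, (lanes, per_lane_gbps) in module_classes.items():
    total_tbps = lanes * per_lane_gbps / 1000
    print(f"{name}: {lanes} x {per_lane_gbps} Gb/s = {total_tbps:.1f} Tb/s")

The same table-driven calculation extends naturally as per-lane rates rise: doubling the lane rate from 100 Gb/s to 200 Gb/s doubles the module throughput without changing the lane count, which is why 200G-per-lane optics are the stepping stone from 1.6T to 3.2T.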

 
