Panel Discussion of Ecosystem for Transitioning from Lab to Production - Lake Harding Association

By Micah Moen 0 Comment October 9, 2019


If I can ask everyone to have a seat, and if the speakers can come up to the panel, panel members? I'm glad everyone is having a good discussion, but we will try to get started before it gets too late. Okay, so while everyone is getting set up: this is a panel discussion and an opportunity for the speakers and the audience members here, and for those viewing on the webcast who can submit questions, to draw out in more detail some of the issues brought up by the speakers this afternoon about testing and physical sensor issues, either things that deserve more detail or points on which there was not a single standard answer. We hope the panel discussion will let us begin to set the stage for the major factors we will consider during the scrimmage tomorrow and include in the report, which we hope will give us an overview of the next steps and major considerations as we start thinking about manufacturing nanosensors. I will hand it to Joe, who will moderate the panel
and I think he has questions to begin. Thanks to our group here. We have some really good questions to start out. Question one addresses the supply chain and supplies of nanomaterials. Believe me, this is basic to using any nanomaterials in a product. Of course the process can be chosen, the process needs to be repeatable, and we work on quality, throughput, and yield for different processes, but none of that helps if you don't have a good source of supply for the nanomaterials. My story is that I started using Helix single-wall carbon nanotubes to make ozone sensors that we finally took to market, and last year the supplier stopped answering the phone. I thought, what? We had to qualify another material. That is not easy. We bought nanotubes from three or four different suppliers, wound up with US Nano, and had to buy three batches from them. All three batches came in different: they had different processing, and we had to change the ink formula. A different bulk density meant the loading was different for the same amount of material. These difficulties plague not only startups but even large companies that want to use these materials. So the first question for the group: could you address the supply chain, and how to choose and source nanomaterials, especially with respect to integrating them into your fabrication process, or any fabrication process? Who would like to start? I have very similar stories, not just with nanotubes but with nanoparticles also. We had a supplier, and I won't mention company
names, but lately we have used zinc selenide particles, and the first batch was really nice: nanoscale, 10 nanometers or less. The next batch was 200 nanometers or more, and we are trying to use these particles to build 50-nanometer structures. It doesn't work. It took them a long time, six months actually, to get back to the supply we wanted. With nanotubes, we had companies sell us conducting nanotubes that we thought were semiconducting, but they were multi-wall and not semiconducting. We fought with them and they would say no. We had to do TEM and SEM and build a device to show them it was not semiconducting. You run into this all the time. The best protection is to have a clear spec when you buy: if the material does not meet the specs, which are also your advertised specs, you return it. It's better to have an alternate and not just depend on one source. Like you said, overnight they can disappear and then you have to start over. It's very expensive to qualify something different. Anyone else? I forgot to introduce the afternoon speakers. My name is Abhishek Motayed, and I am founder
and CEO of N5 Sensors; a brief introduction to our company: we make microscale gas sensors on a chip, single-chip sensors. I guess that's pretty much it. The answer to this question: supply chain management is actually so important that it can almost make or break your company, especially as a startup. If this is the first time you are trying to do it, you have to keep in mind that at the end of the day, if you are lucky enough to establish your own manufacturing process, that is great, but for most of us it's incredibly challenging. That means you are looking for a contract manufacturer, maybe somewhere in the U.S. or somewhere outside, who knows, and in doing that you have to ensure that the manufacturer you are trying to work with will be okay handling the materials that you are introducing into your process. For example, for the product that we manufacture, we put 35 or 40 different processes on top of each other to deliver the final product, but the starting material is a wafer that we get from a vendor in Europe. In between you have chemicals and materials ordered from different companies and vendors; there is a huge Excel sheet with hundreds of line items, and you are trying to control every single aspect of that process line, as we call it, from wafer specs like RMS roughness, thickness, and whatnot. At the end of the day you have to understand that you have to take all of that, because if you want to scale your company and make money out of it, which is the whole point for people trying to start a company and build it, you have to transfer all of that into commercial manufacturing capacity. Most of the time contract manufacturers don't want to deviate from what they have been doing for the last 10 years. That has been a significant challenge for us. They will say, we can't put this material in one of our tools, because this is our main business line. You'd better know that now rather than a year later, when you have invested $1 million transitioning your manufacturing to the contract manufacturer and then they say we can't do that. My advice, and I learned it the hard way, is that you have to talk to all these contract manufacturers and vendors to figure out what is compatible down the road, because it's not about now or today; you are looking at the future. When you have a product and you are scaling up, start early: qualify everything that can go into production, make sure it's compatible with tool changes, and ensure the vendors will be around when you want them to scale up. It's challenging. We didn't introduce the panel, so I want them
to introduce themselves, say where they are from, and pass the microphone. I am Sameh Dardona, a principal scientist
at the United Technologies Research Center in Connecticut, and we work on aerospace and building-industry applications. I'm Mei, an assistant professor at Kansas State University; I started a company with my postdoc. We work on nanoparticles and hope to have particles that can be integrated with the sensor, so we are trying to build the particles into an integrated system. Starting the company is the first step. My name is Ahmed Busnaina; I'm the director of the NSF Nanoscale Science and Engineering Center at Northeastern University. I'm Susan Rose-Pehrsson, and I am the director of the Navy Technology Center for Safety and Survivability; my expertise is in sensor arrays, multivariate data analysis, and sensor testing. I'm Joe, and I'm currently with Spec Sensors
and KWJ Engineering in California. And that is a perfect lead-in to our next question, which is about making multiple measurements, either with multiple sensors or with a single sensor under multiple measurement conditions, to improve the quality of data; in other words, multivariate data analysis, and how to format data so it is suitable for multivariate analysis, which can pull information out of your data. Also, how to overcome variability in measurements and in sensor stability over time, temperature, and other conditions. The perfect person is next to me, and then I will go back this way: Susan, who can address multivariate analysis and sensors. Sensor array data is very difficult, as you
know, and the question of how we handle this and design a sensor array dates back many years. We have been struggling with it for decades. Recent papers have been on the limit of recognition, as opposed to the limit of detection or the limit of quantitation, because when you are dealing with multivariate data, first you have to have the pattern, and the pattern may change
with concentration. The other issue is I have always believed
that it is a good idea to put multimodal sensors together so the information you are getting
is varied and you are measuring different parameters about that chemical and not just
similar parameters as you would have if you did an array of just one type of sensor. That introduces its own challenges of how
do you data fuse information that is coming from multiple sources because many times they
have different response times and varying parameters? Joe and I had many experiences together years
ago when we were looking at electrochemical sensors and some sensors would dominate the
entire pattern. We had to do pattern normalization before
we could even identify the chemical to move forward. Some research going on in my group at the
Naval research laboratory is in design theory. We are trying to take information theory and
apply it to sensors. Some of the first information coming out of
that is more is not always better. You can put a lot of sensors together and
in some cases all you do is increase the level of false alarms. You are not actually getting additional recognition. You have to do this in a very intelligent
way, and many times your sensor array is no better than the weakest sensor in the array
depending on how you do your algorithm development. So as you increase the number of sensors, this becomes a very complex approach, and the whole field of statistics and so forth has to be expanded from univariate to multivariate measurements. I want to say one thing that Susan taught
me in the mid-1980s, when I was doing this work as a scientist and she was working with Peter at Penn State, doing what was at that time very new work: multivariate analysis, pattern recognition, and principal component analysis on these kinds of data sets. The one thing that struck me as important, which I have never forgotten, is that you can do pattern recognition with any group of sensors or sensor signals, by operating them differently or whatever; but if you do that for a series of chemicals in a certain matrix, that is your data set. If you then take an unknown measured in the atmosphere, in a different matrix, that pattern is not mathematically a member of that data set, and it is incorrect in terms of rigorous mathematics to interpret that pattern with that principal component analysis. You either have to do principal component analysis or multivariate analysis for every conceivable data set in every conceivable environment, or you have to be stuck with some imperfect analysis. Having said that, all chemical analyses are imperfect, even atomic absorption with its finest of lines: you can do a nickel analysis with one line you pull out of the spectrum, but if there is an interference that produces that line, you will get an incorrect and invalid measurement even from an atomic absorption measurement. The question is, how probable is that? The answer is: very improbable. That is a pretty specific line in the world
of chemistry, which is extremely diverse. So constructing the multivariate data set to be representative of any application data set is, I think, the biggest challenge that people have. The second challenge is that if you have multidimensional sensors, a heterogeneous array is typically better than a homogeneous one for information content; but if all the sensors drift in different dimensions, how do you correct for that drift and compensate or calibrate? The people who say AI is the answer may be
right, but I don’t think it will be simple. There is another application that nanotechnology
introduces to this whole field of sensing and that is the ability to go from zero order
measurements to first-order and then second-order. By the time you get to third order measurements,
then you can introduce that unknown and you don’t have to have all the testing that is
required if you are working in zero order or first order. Examples of that in the laboratory are things like a GC in front of a mass spectrometer, because now you have the resolving power of the GC in addition to the mass spectrometer that is already giving you multidimensional information, and that is an example of a third-order instrument. We can start to do that now with nanotechnology
because we can put one type of separation device that may be very fast and very small
in front of another type of sensor. We can start to think about developing field
applications of third order sensory devices that can handle real-world unknowns. The only thing I will say about that is everybody
wants sensors that just do everything. Companies want that, and everyone you deal with actually wants the sensors and thinks it's very easy: you just plug them in and they work. The one thing that we deal with, and you talked about it a little bit, is sensor-to-sensor interaction: one sensor requires low voltage and one requires higher voltage, some are electrochemical and so forth, and if you put all those in the same array you have noise or maybe thermal effects, and those can lead to a lot of errors. Can we talk about sensor diversity and biology? That's actually my interest, as far as biosensors,
and I totally agree with the multidimensional integration. For the sensors we are talking about now, it's not just the sensor itself; it is integrating the sensor into a device. It's device fabrication combined with material surface chemistry, and meanwhile electronic integration and readout, so it's a whole system. We can integrate the sample processing with the material sensing part, because at some point that integration can give you a much purer sample. Especially for biological samples and biosensors, we always want to integrate the sample preparation with the sensing part. This can purify the biological samples, which usually come in really complex matrices, like water or food. We integrate a purification step at some point in the process to isolate the target we want to sense. In that case we have much better specificity and sensitivity, which can address the readout drift problem as well. So, Sameh Dardona from United Technologies, maybe you could start us off on our third question, which is very important in integrating these real-world activities. It asks: what are some of the needed standards for materials, processes, and terminologies that would be useful for integration? My small experience with United Technologies is that they have a group there that is interested in sensors for the environment and in integrating them into architecture, so sensors become part of the infrastructure and the architecture, along with Carrier air conditioning and things like that. How do we interface some of the new nanotechnology materials, processes, and terminology? What standards would help get us to products faster? Very good. I would like to say we have two applications
we need to see. One of them is an aerospace application and
the other is the building management application. In the building management industry what we
deliver to conventional buildings are smoke alarms and flame and fire detectors. A lot of what we do right now is based on
ionization: we ionize the air and measure a current at a baseline. For example, when there is smoke, the smoke will reduce the amount of current detected, and then you know there is smoke. What would work for us is something that works for all kinds of smoke: maybe you have smoke coming from a material like wood or plastic, and you have different particle sizes. Right now we are looking at nanotechnology as a way to enable small smoke alarms and smoke detection. We do that by looking into materials that could be printed or made to conform to flexible substrates and used as ionization sources. The standard we look for is more like: if I want to have nano-ionizers, material that can create ions or electrons as a source for my ionization, I want to have some requirements on the material in terms of lifetime, in terms of the amount of current that can be handled without degradation, and I want to have requirements on the stability of this nanomaterial under operating conditions. For example, is there oxygen that could surround the material during operation, and would it destroy the material, or would it still function as it should? By the way, what you are saying, as the audience
should recognize is very typical of industrial needs. He didn’t say a lot about the sensitivity
and is assuming it will do that, but stressed the lifetime and the reliability. Without those you cannot market a product. In terms of nanomaterials, our center has been exploring nanomaterials for those applications, and as Joe said, one major barrier is the lifetime. Nanomaterials are small and they have different properties in terms of heating effects, melting effects, and so on. A lot of times we drive voltage and current through them, look at them after a few weeks of testing, and find chemistry being activated within the material because of impurities that are part of the formulations. The quality of the samples is critical. The operating lifetime is critical for the application, for being able to deploy it and meet requirements of 10 years of operation. Right now we have smoke alarms that can live 10 years in your house based on radioactive materials. We are looking into green alternatives to replace those radioactive materials with nano-based ionizers, where the currents would be the same. The requirement would be 10 years of lifetime, and that's where we need improvement in the nanomaterial formulations and the purity of the samples. So we have time for one last comment if someone
wants to summarize the panel or say something unique. Anybody? I want to follow up on the question about
standards. It came up in some of the talks, and you have particular things that UTC wants to see. For sensor developers who have a technology in a lab: to what extent do you think most developers early on actually know the standards that their material needs to meet? Do they understand the conditions in which it will be used, and is that a big failing point, do you think? When you talk about research and licensing the technology, is this an issue that relates to testing, where people don't understand the environment in which the sensor will be used, so they have to test? Maybe you could talk about developing in the lab and people trying to license out of a lab; is that a big issue? The purpose of the panel is to educate early-stage developers, so I would appreciate any comments people have on that. Okay. Of course. You have to explore and invent in the lab,
but then of course when you get to the practical side you have to start considering other things. It's best to be aware of everything. Maybe the panel has a good comment. I will mention one example. Basically, as Sameh and Joe said, they assume you will deliver, but they want the lifetime and they want reliability. For example, when we did an Air Force project, they gave us two pages of specs including the lifetime, the sensitivity, the power requirement, and all of that. Sometimes you have a list. Knowing whether you can meet the list or not is a different story. Companies usually don't give you that specific a list. They assume you know what you are doing. They only care about the big stuff, but then when you deliver something that doesn't meet their specifications they will let you know. So having specifications like that is very important and helpful. It helps researchers try to meet them, and if they are lacking in one of these specifications, and these can be two pages long, then they try to modify. One of the specs was not even on the sheet: signal strength. I don't think they had gotten to that point yet. Initially our sensor output was only picoamps, and you have little room to work with at that signal level. We couldn't do it, and I said it's ridiculous, you cannot have that; you have to redesign the sensor completely. Then they got to 10 microamps and I said that's not good enough; you have to get to milliamps. And then they got to that point. So having specifications from companies or government agencies is very useful, I think. Mei, would you find specs to be very
useful in developing biosensors? Do you feel you have that, and how can you get it? To be honest, the first point of entry would be
to gather information. In the laboratory we have experimental discovery, and we use SEM, TEM, and lab standards to validate the material. Once we feel it's validated from our standpoint, we move forward; specs would be really useful information with which we could align our protocol with a standard protocol. I think another area that is frequently overlooked
by sensor developers is sampling. I think sampling can make or break the performance
of a sensor. Indeed. [Laughter] Indeed. Any questions from the group before we break
for lunch? We did put off questions from before as well
so any questions from before, or final comments? I heard comments about multivariate analysis
and trying to use that in a sense to capture the formulated properties of the sensor itself. But you are limited to the fact that your
multivariate analysis and the structure you built are based upon the data you have, and you don't necessarily have any confidence when you extrapolate to other material types unless you have appropriate calibration data. How do you propose to overcome that? If you apply machine learning or deep learning, the system does the prediction and classification for you, and you don't know if the classifiers are right or not, and you don't know how they extrapolate to other cases. So how do you propose, or what approaches do you see, to try to address these issues? You have formulated materials: you want to look at the raw material properties themselves, but then the formulation itself endows them with a different set of properties sometimes. What is your view of how you approach that? Before I defer to Susan on this, I will give
you one example of a very successful multivariate analysis. It was the degradation of olive oil. The experimenters took a bunch of olive oils, different grades from Italy and from Greece, measured the degradation with a mass spectrometer, and found the molecular basis of it was the formation of [Indiscernible]. Then they took olive oils and spiked them with [Indiscernible], did multivariate analysis with chemical sensing, sniffing the [Indiscernible]-spiked samples at different concentrations to create a calibration curve. They went back, calibrated that, took olive oils and let them spoil, and they fell exactly on the multivariate calibration curve. They understood the molecular basis and understood the matrix; the samples were from virtually the same matrix, and although there is biodiversity from year to year in olive oils, it was sufficient to show that this multivariate analysis technique worked for the spoiling of olive oil. That is a good model for an example of a successful one. You can see it is quite limited when you think about it in terms of ambient air or breath, for example, with thousands of chemicals. Susan, maybe you have a more detailed comment. That is a perfect example of where electronic
noses have been successful. Those are very controlled experiments where
we can control the environment in which the sensor is working. That is a closed environment. Where sensor arrays frequently run into problems
is in the open-world environment. That is where you can't predict every interferent they may run across, and we don't have enough money in the world to test for every variation
that they may encounter. That is where we need to rely on nanotechnology
to help us get beyond zero or first-order sensors and move into second and third order
so we have that more robust approach. That is one approach. The other is sampling. Many times sampling can limit the environment
in which your sensor has to work. That is a very powerful tool as well. A prime example of that is the IMS, the ion mobility spectrometer, an instrument that can be used in ambient air. It doesn't like water. Many of the chemicals that it is trying to detect, when they cluster with water, give a different retention time, so you miss the detection of them. If you can limit the sampling such that you
can reduce the water and they do that by circulating dry air within the instrument, then you end
up with a very powerful tool. Those are two approaches and also the reason
why we are looking at this design theory as a way of overcoming some of these open world
problems. Thank you. One more? I would like to mention from our experience,
one way to address the issue of multivariance is to use multiple technologies. Basically, when you think about a sensor, sometimes you think about one technology or one way of doing something. Think about a flame. If you want to detect flame or fire in a room, the flame has signatures that are visible in the IR and UV bands, and if you stick with one detector technology you can design one, say a UV detector. But that UV detector could have a response to solar radiation, and then you think you are measuring fire when you are measuring sunlight. Therefore, one way to do it is to integrate another technology on the same platform, which means you can add an IR detector, and then
you can go even further and add different technology to the same platform. A platform that can integrate more than one
sensing technology: think about using optical in addition to semiconductor or radar technologies on the same platform, and you have different ways of measuring the same
thing. You have validation from multiple detection
mechanisms, and that reduces false alarms and makes your component more robust for the application. We do that a lot in aerospace and in building management technologies, where we integrate more than one technology for the same sensing principle. More questions? You probably won't be able to answer this
but I’ll ask anyway. I’m Angela and I’m a program manager at DHS
Science and Technology. A lot of our customers, which are DHS components like the Coast Guard, TSA, etc., don't really know what they
have as far as requirements or needs so they come to us and someone mentioned wanting everything
for no cost and no false alarms. We get a list that looks just like that. Throughout the talks I heard things about
sensitivity, retention, repeatability; if you had to prioritize those characteristics, would you be able to do that, and how would you do it? What should we think about, we being program
managers deciding how to spend dollars to fulfill needs of customers, how do we think
about these sensors and how to prioritize so we can get some kind of product out in
the field? I will ask the panel to chime in on this,
but from my point of view I would like to see agencies write performance specifications
rather than characteristics of a sensor. For example: I am a fireman and want to run into a building and need to know whether I need my self-contained breathing apparatus. Or a requirement for an activity, like: I need to inspect train car derailments, and chlorine could be airborne. Or: I carry a rifle and a gun and a scuba system,
so I would rather see agencies write those kinds of user scenarios, and then ask the community to define some sensors, or an approach that could maybe combine them in some way, and get a useful tool to that person so their health is protected or an asset is
protected or something like that. In my experience, I believe sensor to sensor
reproducibility is very crucial because you can’t afford to do all the testing that needs
to be done on every single sensor that is produced. You have to be able to do the testing and
then hope that calibration transfer moves to every sensor in that batch so that all
you are doing at that point is validation testing. I think Joe basically said what I was going to say. It takes the user and the sensor maker; if the user doesn't know, then they have to get together and decide what the specifications are, because only the user will know what the needs are. The sensor designer or manufacturer will know
what the limitations are. They can reach some compromise. We run into a lot of people that ask for a
sensor that we cannot make. They may actually have some flexibility, in terms of knowing: I really would want 500 degrees Celsius, but I can live with 300. Then we have something. But for TSA, or a situation with fire or chemical
spills or so forth, only the people that actually deal with this will know what they want or
need. I, the sensor maker, can say the sensor I can give you will last for six months. They say that's not good enough, I want it for five years. I say maybe we can do five years, but we don't know yet. For our Air Force project, they knew
and gave us a very long specification, but in our development we found other specifications
that were not there because they didn’t know because I didn’t have any sensors at that
time. We are funding new types of sensors. So Joe got it right on the money because it’s
very important. So science and technology is a human endeavor
and it’s very evolutionary. To have the user define on a regular basis
what their needs are in terms of fit, form and function, and then to have sensor people
describe what they can produce, so maybe 300 degrees Celsius fits 90% of the market and
you could do a lot of good it may still want to do that. Even though when you get the requirement from
the user he says he needs 500 which covers all of his situations. But maybe 300 covers 90% and that would be
a tremendous advance. So you get together and decide what is critical. On the other side, United Technologies knows
they don’t have a 10 year sensor end are not competitive in the market and it will never
go anywhere. That is a dropdead requirement. So you have dropdead requirements, you have
nice to have, you have want to have, and that dialogue must go on between the user and the
sensor developer. I always try not to try hard and fast lines. A different way to think about that question
is that we can start with problem identification. You may not know how to connect the user with the sensors you have, but you can start by identifying the problem: here is a problem in the real world, and we need to solve this specific problem. That can bring the specific user and developer together to think about a solution. That can be a more efficient way to facilitate this communication and develop a sensor that can be used in the real world. A comment from our experience as a small
company trying to develop a product, reflecting on Joe’s comment and everyone said the same
thing. How do you develop what is important to have
and what is needed is basically Every sensor application and customer is different in terms
of needs and requirements. One would be 60 degrees and another person,
it’s 90 degrees and not nearly enough. The first step is sitting down together face
to face and discussing what is needed, and then what we can do, and the other thing is
eventually at the end of the day as a manufacturer you have to make a decision whether you actually
want to pursue down that business line to address those markets. Sometimes specifications are very market oriented. Like automotive sensors have very different
specifications versus smart home sensors. If you are trying to address a wearable market
like we saw on the slide, packaging requirements are out of this world. You don't use your cell phone for 10 years — you will be lucky if you use it for one year. Lifetime is important, but guess what is more
important? The packaging size — the form factor. When you sit down with the people who make smartphones and they talk about size and cost, it's a very different ballgame versus a thermostat or a smoke alarm, where you know it's a 10-year lifetime. It's a very different ballgame for every application in every market. It's better to understand which market you
will focus on as a sensor manufacturer and then see if you can address that exactly by
what is needed by your end-user customers, and it starts with dialogue. You have to iterate back and forth for a while. I would repeat what most have said. The key to success is early communication and being involved with the end-user early on. If you want to be successful, align with their needs, and meet the requirements, I suggest you try to know what your target application is and understand who the players are in that domain or space. Talk to them and understand the requirements
as much as you can and then you can make your material, or design, or sensor aligned with
the application. The requirements form an application document, and when that document is created it will include operational requirements that come from the operating environment itself: how long it must last, the amount of pressure and vibration it must withstand, and so on. There is also the voice-of-the-customer wish
list. You have a customer who will tell you: I have a sensor that can meet all the requirements and does everything I need, but I need it to do this thing as well, or additionally measure this thing. Or I need the lifetime to be longer, or it needs to be seamless in my component, not visible, and not obstructing airflow. There are things that are must-haves and things that are desired-to-haves in your system. Understanding both of them is critical for
your engagement and success as a supplier for your end-user. Additionally, on materials: materials are the key element in this technology. Every sensor has a material in it, and understanding your application matters because materials degrade and can change over time in response to the application environment. Again, back to the human element for me, and
an honest conversation between the user and developer and the people in between on what
is a drop-dead specification, a must-have, and what is a want-to-have. In my experience it is also about the timeline. If you ask me whether it is possible to develop nanosensors and AI sufficiently to do a lot of multivariate sensing — nanotube sensing systems with the whole integrated package — I would say to you yes. Many people are developing different parts
of this all the time. The technology is moving quite rapidly. But if you want this in a package for the field tomorrow — if I told you I could do this, I would probably be overstating it; I don't want to use the word fib, but I would be overstating. It would be super high risk, if you want to
think of it that way. If you absolutely need it today and need to make an impact, maybe you have to make compromises. Maybe the package is bigger, or the power level is higher, or you need more sampling or more macroscopic things built around the nanosensor of today. So then you look at: do I go with a cost that is undesirable and a size that is undesirable but get a quick fix, or do I hold out for everything? We face that paradigm many times in this kind of world. I want to go on to one more question. I just have a short story. This is actually the sensor I showed in my
presentation. The chemical sensor. This consortium that funds these projects
had a workshop to define what they want. I couldn’t go so I sent one of the young professors
and he came back and said there is no hope and we will not submit anything. He said they wanted a sensor that can detect chemicals and they wanted the size to be between 50 and 70 nanometers. He said that's impossible. I said we will submit, and he said no, it cannot
be made and I said I know it cannot be made. But these guys don’t know what they are talking
about. [Laughter] So we submitted a proposal and
propose something between 100 microns and 500 microns and I think our proposal had the
smallest size proposed. So that is a case where the user wants something
but they don't really know exactly what they need. The reason they wanted 50 to 70 nanometers was that they wanted it to go in a box to test things. There are many stories like that in the naked city. Any more questions? I think there was one more in the audience. I have one more. We talked about nanotechnology and nanomaterials
and integrating them into sensors and then integrating the sensors into electronics and
then packages and instrumentation for providing information. The question to the panel is: do self-contained devices provide greater design freedom or greater paths to market than just nanomaterials
and sensors? We can go right down the line. This is a really good question and an opportunity for N5 to give you background. What we are trying to do is take the chemical sensor chip, which is an analog product, and sell that as a product itself, and also combine it with an ASIC — an application-specific integrated circuit — and some of the components, and put it in a small package with a filter membrane that would selectively remove some compounds and particulates. The size is three by three millimeters and one millimeter thick. The reason for that is that is the wearables
size and form factor requirement. The reason for trying to do that is that a self-contained sensor is where everyone is going. If you are trying to address IoT, wearables, industrial IoT, or smartphones, this is what you want. In this day and age, software still makes
the products profitable, and at the end of the day everyone wants data. No one really cares about sensors. It can look like a black box, but as long as it provides data, people like application developers build on top of cloud computing, add to that to get actionable intelligence, and add on top of that to make services and products and things like that. Putting it all together is a significant challenge,
although this is probably the easiest way to get to the market and grow your company
rapidly. You have many examples like that from companies
like SensoryOne that produces industrial standard temperature and humidity sensors in a single
package all-digital project. You drop it in any board and it provides a
solution and anybody with maker type education and used to tinkering with boards can integrated
into the product. That is why you see in the crowdfunding campaign
more and more hardware-based products coming in. If you look at sensors, this is where they are moving: all the solutions in a single package. The problem is that developing this kind of sensor
is extremely challenging and time-consuming and resource intensive for a startup like
our company. You not only have to develop the sensor, you have to pay for the ASIC development and the packaging development and line up every single thing. It goes both ways: it's the easiest way to market and probably what the world needs right now, but not the easiest to produce and manufacture at large scale. But the main point is that if you have the whole
package and useful data — nobody cares about the internals of the sensor or the raw signal from the sensor, just the temperature- and humidity-compensated output — then you can go to all the engineers making products. It is expensive, and a generic ASIC doesn't
exist for all sensors and all sensor types. We have a couple coming out for chemistry: TI has one now and is coming up with the next generation with temperature and humidity compensation on it, and ADI (Analog Devices) is coming out with one. People are starting to move forward, but until
the investment is made — like the investment in semiconductor devices such as microprocessors, where you can see the progress from the 1980s, when the first RCA 1802 we used on sensor arrays had 2K of memory and no I/O at all, to the microprocessors you can get today for $2 with I/O and all kinds of features. I think sensors have to evolve the same way, and we have to have platforms that also support the sensor infrastructure, in order to connect to the marketplace — to engineers who are building devices like garage door openers and controls for appliances and all kinds of interesting machines like that. Other comments? I would like to comment on the question. I would say whether it's a standalone or integrated
sensor is an application-driven decision. I would like to point out that for aerospace technology and aerospace components that are high-end and expensive — some range from $25,000 all the way to $2.5 million per component — what we look at in terms of sensing is integrating sensors within the component. Those kinds of applications require the sensor to be embedded and seamless, causing no harm to your structural design. The technology for this is printing and nano
combined: you marry nanomaterials with printing technologies. This gives you an option to print things you can't even see with your eyes. Inside a composite, that could be a strain gauge, or RTDs, or an antenna. This is part of manufacturing now, not something that stands alone or is self-contained like a circuit. It's more that you need to print that design at that specific location, with the minimal amount of material or mass added, without harm to your design, and be able to integrate it in the component, wired or wirelessly. That application requires a focus on the integration
and what kind of manufacturing is needed or available that can enable me to print a strain gauge that is 10 microns by 100 microns in size, for example. That gives you a way to measure the load or damage that can occur during the rotation of a fan blade. For me, one of the major things is to have
this dialogue, and maybe there can be some instigation of these dialogues by NSF manufacturing programs
because I think you can play an important role bringing industry and sensor developers
together. This whole thing is really part of a very
important dialogue that you have to have. You can spend lots of time and money developing
a specification or a manufacturing process that is totally irrelevant to a vertical. Every vertical is different — in automotive they are virtually impossible for small companies to deal with. They need second-source suppliers and assurances and certifications up the wazoo. It's crazy. For a small guy these are difficult. So it takes partnerships from market verticals
to have a discussion around those with sensor manufacturers to integrate the skill sets
that are needed to get the nanotechnology to manufacturing and to verticals. They are all different, whether you want aerospace
or automotive or industrial or biomedical, they are all very different. Some of the major topics in the major market
areas could be instigated by NSF. I must say in my experience with NSF, it was
one of the few agencies that followed the technology and let the group develop. We had funding for about four years from NSF
and now we have a standalone company. They did that with a lot of the centers: they initiated funding, the centers would have five years, and then they would stand alone. This has proven to be a really successful
way to fund technology. Agencies whose funding is not secure and changes from year to year — it is not necessarily the fault of the program managers, because they are in a very volatile environment — have a much harder time getting some of this mission science done. I think it would be better if they had clearer core, sustained efforts. Relatively few of us here are probably interested
in this but my question would be to each of you is there a particular federal program
or agency that was really helpful to you and is there a particular federal program or agency
that got in the way? [Laughter] I’m interested in the good and the bad. Not long diatribes about paying taxes or something
like that. You can start. I think the program I would like to point to is the NextFlex Institute in San Jose, California, funded by DOD. The reason we like it is that it's membership
based and it’s roadmap driven and it’s created and made
by the members themselves based on their needs and vision for their technologies. The second thing is their proposals need to
have team members from different organizations which requires academic staff as well as industry
staff. They also specify the TRL (technology readiness level) for the proposal. They have a clear description of where in
the TRL map they want to make the investment and what technology they want to move forward
to the commercial line. Additionally, they provide feedback for proposal
funding and provide details, and they're very engaging as an institute. For N5, SBIR has been a great support, like Angela's program, and from NASA also. One thing — it's not really a program, but in terms of infrastructure — is the NIST NanoFab. NIST was one of the facilities with an architecture that allows very easy access for small businesses to come in with their team of engineers and start building their own manufacturing and R&D line inside the federal facility, paying for it of course. After that, ARL opened its campus. So opening up state-of-the-art facilities
— built with taxpayer dollars, of course — actually fosters an ecosystem around this local area where small businesses come in, develop a product, and get to manufacturing through contract manufacturers. It helps more small businesses because you
see a model at play and done successfully. We have developed a couple of products working
at NIST and that has been a great help for N5. We see more and more federal organizations
opening up, not only manufacturing but also characterization and that is one thing that
small businesses cannot afford, especially materials characterization. That's really helpful. From the standpoint of a small business at a university, NSF I-Corps is helpful: it helps bridge the gaps between academia and industry, and helps faculty identify partnerships and problems and reframe product development. That program provides lots of pertinent questions to think about during product development, and you hear different perspectives that help you develop the product. Also, funding from the USDA nanotechnology
program was pretty helpful in driving innovation and getting innovative technology applied to particular real-world problems. We had a very good experience with NSF. They funded our center and, like Joe said,
the long-term funding is very important because we would not have been able to develop this
technology had we had a two-year contract. That was actually very essential. We have not had, I’m glad Sam had a good experience
with NextFlex the concept is very good but the way that and many of these institutes
operate is the moment you pay, the more say you have on the topic and the more money you
pay the more say you have in which projects get funded. If you are University and you can’t get the
money to pay the high fee then you don’t have much say in what gets decided. The concept is excellent actually. I just have a little bit of trouble with their
fee-based system. There are other institutes that do not require
as much membership fee and there are some that require more. That’s the only trouble I had. But the concept is very good. It’s actually more like the Institute. I am in the unique position here since I am
government but we do get our funding from other government agencies. Sustained funding is key. My two best sponsors over my career have been
NASA and DHS. Those are primarily because they have a long-term
view that allowed us time to develop the technology and move things forward. I think government programs — government-industry partnerships, government-university partnerships, and national laboratory partnerships — all of those are part of the technical infrastructure that moves our country forward. I will tell you one other thing: the small
business administration and SBIR — SBIR is the technical part of the SBA. The SBA has another program I used called SCORE, the Service Corps of Retired Executives. At my first company I had an SBIR and I sold some instruments to Finland. And I said, how do I do this? Import, export, customs, invoicing? So I called up the Service Corps of Retired Executives at the SBA and said I need a guy who sold stuff to Finland. And they found a guy who was an ex-executive
from Motorola, who said: you need a freight forwarder — he does all the work; you need a bank and a letter of credit — they do all that work; and it's in and out with no problem. It was amazingly helpful and interesting at
the same time. There are a lot of programs within the SBA that are helpful to technology companies, and SBIR for startups and innovation is tremendous. As you get closer to products, the industry-government relationships are more important. For innovation I think the government-university
relationships are much more important. And the national lab system is absolutely a gem in this country and needs to be cherished and resourced more than it is, given the mission that we have to fix big problems like our health and climate change and all those sorts of things. And I include the Naval Research Lab in with
the national labs and they are tremendous resources for us. Following up from that question what is your
experience with the U.S. patent office? They have a small entity filing status that
is a little less egregious but the patent system is still something to navigate unless
you have experience. We have filed more than 100 patents, and on average they take about four or five years. Even if it's completely new — if there is nothing like it, for example — it will go faster, but still take about three years. If there is something similar, or even similar in a keyword search, it will take at least five years. We really need to break for lunch. Arlington is busy. So what I want to point out to everybody — and
I mentioned it before — this map here shows there are a lot of options around Arlington, so you should be spoiled for choice as to where to eat. Let's meet back here at 1:40 PM — and let's do it on time so we can end the day on time. That was really a great panel, and thank you
so much. [Applause] I thought that was really terrific. Thank you.
