If they can make it work, it only extends Moore’s Law for a few years in theory, and that’s assuming the transistors aren’t just smaller but that more of them can actually be packed into a given area. That’s been the problem with silicon chips in recent years - as transistors get smaller, the heat problems get worse (power density climbs past what the chip can dissipate, the breakdown usually called the end of Dennard scaling), so even when they halve the feature size, they can’t actually fit and run twice as many in the same space.
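For a rough sense of the stakes, here’s a back-of-the-envelope sketch of what “a few more years” of ideal Moore’s Law scaling would mean for density. The two-year doubling period and the ~100M transistors/mm² starting figure are illustrative assumptions on my part, not real process data:

```python
# Back-of-the-envelope Moore's Law projection (illustrative numbers only).
# Assumes the idealised "density doubles every 2 years" form of the law;
# real scaling has been slower and is gated by power/heat, not just geometry.

DOUBLING_PERIOD_YEARS = 2.0

def projected_density(base_density: float, years: float) -> float:
    """Transistors per mm^2 after `years`, under ideal Moore's Law scaling."""
    return base_density * 2 ** (years / DOUBLING_PERIOD_YEARS)

# Assumed starting point: ~100M transistors/mm^2, a rough order of magnitude
# for a recent node (actual figures vary by process and vendor).
base = 100e6
for years in (2, 4, 6):
    print(f"+{years} yrs: {projected_density(base, years):,.0f} /mm^2")
```

The gap the comment is pointing at is that the exponential only holds if every one of those extra transistors can be powered and cooled, which is exactly what stopped holding.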
The larger problem is that we’re about at the point where, whatever the chips are made of, transistors physically can’t get much smaller: quantum tunneling lets electrons leak straight through barriers only a few atoms thick, and you can’t make a transistor narrower than an atom in any case. It’s entirely possible someone will come up with a new way of doing computing - say, bouncing photons around - that doesn’t have the same size limitations, but Moore’s Law is very much about the physicality and practicalities of chip production, so there’s no particular reason to believe there would even be anything analogous for a new computing architecture.
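To put a number on the tunneling problem, here’s a minimal sketch using the textbook WKB estimate for tunneling through a rectangular barrier. The 3 eV barrier height is an assumed stand-in, roughly in the range of a SiO2 gate oxide; real device leakage modelling is far more involved:

```python
import math

# Rough WKB estimate of electron tunneling through a rectangular barrier:
#   T ~ exp(-2 * d * sqrt(2 * m * phi) / hbar)
# Illustrative only; 3 eV is an assumed barrier height, in the ballpark
# of a SiO2 gate oxide.

HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E = 9.109e-31     # electron mass, kg
EV = 1.602e-19      # joules per electron-volt

def tunneling_prob(thickness_nm: float, barrier_ev: float = 3.0) -> float:
    """Approximate probability an electron tunnels through the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Leakage explodes as the barrier thins.
for d in (2.0, 1.5, 1.0, 0.5):
    print(f"{d:.1f} nm barrier: T ~ {tunneling_prob(d):.1e}")
```

The point is the exponential: every half-nanometre shaved off the barrier multiplies the tunneling probability by roughly four orders of magnitude, which is why shrinking gates hits a wall rather than just getting gradually harder.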
Oh man, consciousness is a whole other horribly tangled, sticky thicket. Whatever it is, it’s almost certainly nothing like how it’s traditionally been viewed, and our perception of our own consciousness is most definitely some sort of illusion or delusion. But yeah, if you’re talking about truly “human-equivalent” AI, you’re really dealing with two problems - intelligence and consciousness - both of which are highly problematic even to define.
I can recommend Rita Carter’s book Exploring Consciousness as a primer on the issues, evidence, and history of the field.