Sell stuff that works, keep prices fixed and keep promises.
I think I speak for the rest of the global defense industry in stating that this practice must be stamped out. It will be the death of us all.
The real problem is that against 5th generation aircraft carrying a Meteor-like weapon, a 4th gen aircraft will be killed before it can locate the aggressor, regardless of the AAM it carries.
Thus saith The Book of Fifth Generation…
It may be time to start ignoring Lukos, since Mods don’t seem to be getting a clue.
Both you and FalconDude have displayed disgraceful conduct
Awww, call the ****ing whaaaaaambulance. Or Dr Fedaykin.
Look, sunshine, just because a technique is applicable to astronomy, where I can take a year to gather data if I want, does not mean anything remotely similar can be done in real time in a situation involving lethal weapons. So demonstrate that link/possibility or henceforth hold your peace, capisc’?
And for what it’s worth, the Fighter Mafia – including Sprey, their point man in Washington, and I can tell you exactly how far you get without one of those – accomplished great things, and I’d rather lick their salt (whatever the **** that means – is English your first language?) than earn the respect of an ignoramus ITG.
And isn’t it way past your bedtime anyway?
MC, right you are, diffraction. Pre-first-coffee post. However, I think you get what I was trying to tell Lukos. You’re seeing flare and aberration in that image, probably affected by video-to-still conversion, however that was done. It looks normal because flare and aberration are most apparent with bright light sources, and a rocket in IR is… pretty damn bright.
However – Contax/Zeiss… I am obligated to burn you at the stake as a lifetime Leica shooter.
Lukos – Please describe in detail all the steps from the raw EO data to that image. Also, explain the visible onion shape from what should be a rocket plume in near vacuum. Make sure your explanation excludes the possibility of lens flare and other artifacts.
I’ll willingly tell AI that they’re full of it, when that’s the case. But the use of processing to reduce FARs and thereby – to some extent – extend range (because you can dial up the gain without getting swamped by noise) is routine, and IRSTs do it as well.
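If you want to see the principle, here’s a toy Python sketch – purely illustrative, with a frame count and threshold I made up, not anything from a real MAWS or IRST. Turning up the gain raises the noise floor with it, but requiring a hit to persist across several frames knocks the false-alarm rate back down:

```python
import random

FRAMES = 5          # consecutive frames a hit must persist (made-up number)
THRESHOLD = 3.0     # detection threshold in noise-sigma units (made-up number)

def noisy_frame(n_pixels, target_pixel=None, target_snr=0.0):
    """One frame of unit-variance Gaussian noise, plus an optional target."""
    frame = [random.gauss(0.0, 1.0) for _ in range(n_pixels)]
    if target_pixel is not None:
        frame[target_pixel] += target_snr
    return frame

def persistent_hits(frames, threshold):
    """Declare a pixel only if it exceeds the threshold in every frame."""
    hits = set(range(len(frames[0])))
    for frame in frames:
        hits &= {i for i, v in enumerate(frame) if v > threshold}
    return hits

# Pure noise: single-frame thresholding vs. multi-frame persistence.
noise = [noisy_frame(10_000) for _ in range(FRAMES)]
one_frame = {i for i, v in enumerate(noise[0]) if v > THRESHOLD}
print(f"false alarms in one frame: {len(one_frame)}, "
      f"after {FRAMES}-frame persistence: {len(persistent_hits(noise, THRESHOLD))}")

# A genuine target with decent SNR still survives the persistence test.
scene = [noisy_frame(10_000, target_pixel=42, target_snr=6.0) for _ in range(FRAMES)]
print("target pixel still detected:", 42 in persistent_hits(scene, THRESHOLD))
```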
It isn’t a camera because it has to detect things at every distance using just one lens setting.
Oh right. Because as we all know when you focus your camera everything else in the picture disappears. Happens all the time.
I’m not sure what he means. As I pointed out, that’s what a lens does – modify the trajectory of each photon via diffraction so that it hits the focal plane at one defined point, and so that all the photons hit the focal plane in the same az-el relationship as their source.
Really, the Mods should eject him. f-16.net would love the guy.
LukeSkyToddler – Yes, way off. How would light from a distant object only hit one element?
Because that’s what a lens is for, perhaps? Can you explain why focal plane arrays are… well, called focal plane arrays? Or how we took photos with film or glass plates and no fancy astronomers’ algorithms? Because, if you can’t, your comments in the last three posts are verbal diarrhea, and pretty thin stuff at that.
Basically, you’re the distillation of the entire JSF fan community. You simply can’t accept that your pet airplane is subject to the normal trades of engineering design, or even the laws of physics.
FD – You have it right. Fundamentally, an imaging MAWS, a targeting pod and an IRST work in much the same way, except that the TDP and the IRST have a steerable mirror in front of the lens. The big difference is the FOV (which is inversely related to focal length – long focal length = narrow FOV).
A MAWS or EODAS has a wide-angle lens. The TDP is more like a telephoto, with a mirror that points and stabilizes. The IRST is a high-power telescope, but with a video-rate scanning mirror.
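To put rough numbers on that inverse relationship, a back-of-envelope Python sketch – the 20 mm array width is an assumed example value, not any particular sensor:

```python
import math

def fov_deg(focal_length_mm, array_width_mm):
    """Full field of view across the array: FOV = 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(array_width_mm / (2 * focal_length_mm)))

ARRAY_WIDTH_MM = 20.0   # assumed example value

for f in (25, 100, 500):   # short lens = wide FOV, long lens = narrow FOV
    print(f"focal length {f:>3} mm -> FOV {fov_deg(f, ARRAY_WIDTH_MM):5.1f} deg")
```

Same array, longer lens, narrower field: that’s the whole MAWS/TDP/IRST distinction in one line of trigonometry.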
To the best of my knowledge it is megapixel-class (1024 x 1024 or thereabouts). Pitch (the spacing of the detector elements) is crucial to IR focal plane arrays and defines the relationship of array size to detector count. As pitch gets close to the wavelength, which it does in midwave IR, things get problematic, so you just don’t get the density that is possible in visible light; hence you don’t see the ever-increasing pixel counts that you see in cameras. Here’s a recent update from that battlefront:
http://www.sofradir.com/sofradir-unveils-daphnis-line-10%CE%BCm-pitch-infrared-detectors/
The max resolution of any imaging device is defined by its field of view and the resolution of its detector (whether in pixels – sorry, but everyone uses that term for FPAs – or the grain of wet film). Each element in an IR detector gives a single shade-of-gray signal and you can’t make the picture sharper than that, at least not in real time or close to it, any more than you could apply magic in the old days and make Ektachrome 400 look like Kodachrome 25. That’s why we adjust the FOV with optical zooms and telephotos – to get detail of distant subjects.
The EODAS sensors are not dissimilar to a cellphone camera in FOV, so you can do the math from there.
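Here’s that math, roughly, in Python – assumed numbers only: a cellphone-ish 65-degree FOV and a 1024 x 1024 array, per the above, none of it a published EODAS figure:

```python
import math

FOV_DEG = 65.0    # assumed cellphone-like field of view
PIXELS = 1024     # assumed array width in detector elements

ifov_rad = math.radians(FOV_DEG) / PIXELS    # angle subtended by one element
for range_km in (5, 20, 50):
    footprint_m = range_km * 1000 * ifov_rad     # small-angle approximation
    print(f"at {range_km:>2} km, each pixel spans roughly {footprint_m:5.1f} m")
```

At tens of kilometers a fighter-sized target is about one element at best, which is the point about element-limited resolution above.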
Further evidence, by the way, is the fact that EODAS was never considered adequate for pilotage, hence the incorporation of the low-light camera (EBAPS) in the helmet.
So EO-DAS refines its accuracy by using an algorithm developed by Chinese astronomers in 2013. In real time. Forgive me if I file this between “Not proven” and “complete bull****”.
You also don’t appear to have a clue about the fundamentals of DIRCM. The wide-angle, physically fixed MAWS cues the NFOV tracker, which is boresighted to the laser in the steerable jam head.
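Since it’s simple and unclassified, the hand-off is easy to sketch. Toy Python, with names, angles and gate sizes invented for illustration, not taken from any fielded DIRCM:

```python
NFOV_HALF_DEG = 2.0     # assumed narrow-FOV tracker half-angle

def handoff(maws_az, maws_el, track_az, track_el):
    """Slew the jam head to the MAWS cue; jam only once the NFOV tracker owns the target."""
    jam_az, jam_el = maws_az, maws_el               # coarse cue from the fixed, wide-angle MAWS
    in_nfov = (abs(track_az - jam_az) < NFOV_HALF_DEG and
               abs(track_el - jam_el) < NFOV_HALF_DEG)
    if in_nfov:
        # Tracker and laser share a boresight, so pointing the tracker points the laser.
        return ("JAM", track_az, track_el)
    return ("REACQUIRE", jam_az, jam_el)

print(handoff(maws_az=182.4, maws_el=-5.1, track_az=182.9, track_el=-4.8))
```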
When you display a lack of knowledge about things that are simple and unclassified, it’s hard to take you seriously.
http://www.photonics.com/Article.aspx?AID=16652
Please don’t say, “that’s from 2001” without showing where and when and how it was upgraded.
Integrated versus federated systems were a BFD in the 1990s. My old scanner and printer that ran from dedicated cards inside my PC were integrated, because you could not put a computer and memory in the peripherals. My phone, camera, laptop, iPad, desktop and all-in-one are federated. As long as my WiFi is working…
Pave Pillar/INEWS/ICNIA were the only way in the 1980s to do automated EMCON and drive a single tactical display. That became the F-22 spec and was the default for the F-35.
Today, you can process at the sensor and as long as everyone is speaking the same language, achieve fusion (which in its most basic form is tying the sensed and offboard data to single combined targets) and create the common picture with an Ethernet link.
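“Tying the sensed and offboard data to single combined targets” is, at its crudest, an association problem. A toy sketch, with made-up track positions and a simple nearest-neighbor gate; real fusion is a great deal more involved:

```python
import math

def associate(onboard, offboard, gate_km=2.0):
    """Pair each offboard report with the nearest onboard track inside a distance gate."""
    combined = []
    for off_id, (xo, yo) in offboard.items():
        best_id, best_d = None, gate_km
        for on_id, (xb, yb) in onboard.items():
            d = math.hypot(xb - xo, yb - yo)
            if d < best_d:
                best_id, best_d = on_id, d
        combined.append((off_id, best_id))   # best_id is None if nothing falls in the gate
    return combined

onboard = {"R1": (10.0, 42.0), "R2": (55.3, 12.1)}           # own-ship sensor tracks (made up)
offboard = {"L16-07": (10.4, 41.6), "L16-09": (80.0, 5.0)}   # datalinked tracks (made up)
print(associate(onboard, offboard))   # -> [('L16-07', 'R1'), ('L16-09', None)]
```

As long as both ends agree on the coordinate frame and the message format, that correlation does not care whether the data arrived over a 1990s integrated backplane or an Ethernet link.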
Indeed, when you start getting into new things like cognitive EW, you may want processing right behind the aperture because the latency requirements are demanding.
Megapixel? The then-supplier (Cincinnati Electronics, now L-3) blabbed that years ago. The problem is the element size, which stayed constant for a long time – this is not a visible-light device. There are smaller elements (more pixels for the same size) in the labs.
Define “many”. It does not mean “one”.
On HMDs &c. As the Intevac EBAPS technology, and possibly equivalents, come online, and with the optical waveguide technology used on the Thales Scorpion and the BAE Q-series, much of the EODAS-plus-HMDS capability will be commoditized. Imagery will be delivered directly from sensors on the helmet, with an advanced MAWS supplying targeting data and only symbology being fed from the aircraft. By 2020 everyone will have it.