Viltrox released firmware 1.21 for the VILTROX 85mm f/1.8 V1.
Firmware update notes:
1. Optimized focus speed and fixed an issue where AF-C focusing was too slow when half-pressing the shutter to switch focus.
2. Fixed other known bugs.
Sony received yet another Gold Award from DPR in their recent review of the unique Sony a7SIII, which you can read here. Their findings are summarized below, and the Sony a7SIII looks like yet another great camera from Sony.
I’ve primarily used the a7S III as a video camera. It’s clear both that video is where the bulk of Sony’s effort has gone and that there’s not a sufficient low-light benefit to make it worth spending this much money on a 12MP camera when less expensive models will produce more detailed images (even when downscaled to 12MP).
What we like:
- Excellent full-frame 4K footage at up to 60p
- Only slight crop to 4K/120 mode
- Excellent oversampled 1080 footage
- 10-bit capture in H.265, H.264 or All-I H.264 gives workflow and grading flexibility
- Log, HLG or Raw output options
- Solid battery life
- Choice of memory card format, with most video recordable to SD or CFexpress
- Nice viewfinder
- Much-improved user interface
- Option for HLG HDR photos using 10-bit HEIF format
- Image stabilization, usable AF and good battery life provide run-and-gun capability
- Comfortable ergonomics
- Full-sized HDMI socket

What we don't:
- No out-of-camera DCI footage
- Missing tools such as waveform display or shutter angle control
- Video AF not quite as effective as the stills system, and it requires the screen to be tapped
- 12MP stills appear low-res, even at reduced sizes, compared with most modern cameras
The topic of computational photography, or really computational anything, is a very interesting one, and I have to say Jordan's thoughts above got me thinking. The Sony A1 reads out at an impressive rate, and many of the bottlenecks that would make computational photography difficult on a full-size camera are starting to fall away, but there are still bottlenecks to overcome, and there are advantages to doing the work on a computer instead of automatically in-camera.
Storage speed has always been a bottleneck limiting performance, and while CFexpress is an improvement over anything else in the world of photography, it is far from fast. This means that, at least in the short term, if manufacturers want to do computational photography in camera, they will need a lot of high-speed storage built into the camera to work with the files. Alternatively, photographers could just shoot a ton of frames and do the computations themselves manually, but more on that later.
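To see why storage speed bites, here is a rough back-of-the-envelope calculation. The figures are my assumptions, not from the article: an A1-class 50MP sensor, 14-bit uncompressed raw, shooting at 30 fps.

```python
# Back-of-the-envelope: sustained data rate of an A1-class raw burst.
# Assumed figures (not from the article): 50 MP, 14-bit raw, 30 fps.
megapixels = 50e6
bits_per_pixel = 14
fps = 30

bytes_per_frame = megapixels * bits_per_pixel / 8   # ~87.5 MB per uncompressed frame
rate_gb_s = bytes_per_frame * fps / 1e9             # sustained write rate in GB/s
```

That works out to roughly 2.6 GB/s sustained, well beyond what current removable cards can absorb indefinitely, which is why a large internal buffer (or off-camera processing) enters the picture.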
In the above video, DPRTV speaks about how smartphone cameras continuously take photos and then use the photo you take as a reference frame to do their magic. But most smartphones still have fairly low-resolution cameras, and they are tightly integrated systems with extremely fast bus speeds and storage, which lets the device juggle more data than a more modular system generally can. This design philosophy has matured quite a bit, and we can now see its potential in the new Apple M1, which is kind of like a cellphone/tablet in laptop form.
When it comes to cameras, Sony could integrate a few gigabytes of ultra-high-speed storage that doubles as a buffer for computational photography, but that would add a lot of expense, and the Sony A1 is already a very expensive camera. They could also build even more AI operations into their chips to speed up the processing of multiple images in-camera. In the short term, though, I hope we will see more computer software developed to deal with hundreds or thousands of photos and generate computational images out of large amounts of data.
We are already starting to see this philosophy from a variety of editors and software packages on the market: TopazLabs, ON1 Photo RAW 2021, Photolemur, Picktorial, Luminar AI, Luminar 4, and Aurora HDR 2019 have all embraced AI to some extent. Even Adobe utilizes AI for some features now, but shooting styles haven't really kept up with what software can do. For instance, when I first got my hands on the Sony a9 at launch, one of the first things I wanted to try was very sloppily shooting a massive HDR pano with the high-speed shooting feature, panning the camera all over the place during a sunset until the buffer filled. I let Adobe throw together the 200+ RAW photos from that experiment, and the results were shockingly good; the output was too large (several gigabytes) to be shared, but it worked.
Decoupling the software side of computational photography from the camera might become a big advantage that dedicated cameras have over smartphones once high-speed shooting becomes available on more cameras, because computer software is easy to update frequently and every camera manufacturer would be able to take advantage of it. There are also many more software companies competing than camera companies at the moment, so the software should end up superior to anything Apple or Google can do within their smartphone operating systems and hardware constraints.
If photographers really wanted computational photography, they could have it today by changing how they shoot. A machine-gun style of shooting provides a lot of extra data to work with, and software can make that data useful, but right now you have to control for things like perspective and motion to maximize quality and keep user input to a minimum. This could all be done automatically in software in the future, but as of right now there isn't much demand for highly automated computational photography on the desktop.
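The simplest payoff of that machine-gun style is noise reduction by averaging. Here is a minimal sketch in plain NumPy, using a simulated burst rather than real RAW files (a real pipeline would need demosaicing and alignment first): averaging N frames of the same scene cuts random noise by roughly the square root of N.

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of already-aligned frames to suppress random noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a 100-shot burst: one clean scene plus fresh noise per frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(100)]

merged = stack_frames(burst)

# Compare a single noisy frame against the merged result.
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
```

With 100 frames the average error drops by about a factor of ten, which is exactly the kind of gain smartphones harvest automatically and desktop software could harvest from a burst of RAWs.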
A lot of fields collide when we start talking about the more technical aspects of digital photography, and one of the above software companies could decide to develop a program that sorts through hundreds of photos looking for common elements to combine, but I don't know a single photographer demanding they do so at this time. If photographers did, we could see very impressive automated computational photography on the computer in the not-too-distant future, and even resolution could become nearly unlimited for any megapixel count just by combining multiple photos. Apple has even added a neural engine to their M1 processor that could be used to speed up exactly this kind of task.
We are in the early days of computer-based computational photography, but there is really no reason why a computer couldn't account and adjust for any and all movement in a frame, line up a few hundred photos, and create images with far more detail and color information than can be captured today in a single frame with even the most expensive camera. The sky really is the limit with computer-based computational photography, but software companies aren't going to get close to what can be achieved unless photographers change how they shoot, or are at least open to the idea.
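Lining up frames is the hard part, and one classic building block is phase correlation. The sketch below is a deliberately simplified, hypothetical example: it assumes a pure global translation between two frames (real scenes add rotation, parallax, and local motion), and it uses nothing but NumPy FFTs.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation that maps `img` back
    onto `ref`, using phase correlation (normalized cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real         # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts (circular FFT).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# A synthetic "handheld" frame: the reference scene rolled by (5, -3) pixels.
rng = np.random.default_rng(1)
ref = rng.uniform(size=(64, 64))
img = np.roll(ref, (5, -3), axis=(0, 1))

shift = estimate_shift(ref, img)            # the correction that undoes the motion
aligned = np.roll(img, shift, axis=(0, 1))  # `aligned` now matches `ref`
```

Once every frame in a burst has been registered against a reference like this, the stacking step becomes a simple average, which is conceptually what phone pipelines do at capture time and what desktop software could do after the fact.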
Finally, I want to circle back to photographers. If you're a JPEG shooter who wants your photos to just look right, then you should be a fan of computational photography, and you should demand that even the cheapest camera be capable of it, because it will make your life easier. If you shoot RAW and the idea of computational photography interests you, then you should start capturing lots of extra photos (data) for the future and demand that software companies automate the management and merging of large sets of RAW photos. If you don't fit into either camp, then congratulations: you are where we are today, and this article doesn't matter to you, because you either want nothing to do with computational photography or you use it selectively when you feel it is appropriate to stand out.
Imaging & Sensing Solutions (I&SS) Sales are expected to be higher than the October forecast primarily due to higher-than-expected unit sales of image sensors for mobile products and digital cameras. Operating income is expected to be significantly higher than the October forecast primarily due to the impact of the above-mentioned expected increase in sales and an 8.5 billion yen gain from the reversal of inventory write-downs of certain image sensors for mobile products previously recorded in the quarter ended September 30, 2020.
• FY20 Q3 sales decreased 10% year-on-year to 266.9 billion yen primarily due to lower sales of image sensors for mobile.
• Operating income decreased 24.8 billion yen to 50.4 billion yen primarily due to the impact of the decrease in sales and an increase in research and development expenses and depreciation.
• FY20 sales are expected to increase 50 billion yen compared to our previous forecast to 1 trillion 10 billion yen and operating income is expected to increase a significant 55 billion yen to 136 billion yen.