https://huggingface.co/spaces/prs-eth/thera
https://github.com/prs-eth/thera
https://github.com/tencent/Hunyuan3D-2
Hunyuan3D 2.0 is an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. The system includes two foundation components: a large-scale shape generation model – Hunyuan3D-DiT – and a large-scale texture synthesis model – Hunyuan3D-Paint.
The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio – a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets.
It allows both professional and amateur users to manipulate or even animate their meshes efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, both open-source and closed-source, in geometry detail, condition alignment, and texture quality.
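The "flow-based diffusion transformer" mentioned above generates shapes by integrating a learned velocity field from noise toward data. A minimal sketch of that sampling loop, where `velocity_fn` is a hypothetical stand-in for the learned network (not Hunyuan3D's actual API):

```python
import numpy as np

def sample_flow(velocity_fn, x_noise, steps=50):
    """Euler integration of a flow-matching ODE from noise (t=0) to data (t=1).

    velocity_fn(x, t) stands in for the learned transformer; here it is
    a hypothetical callable, not Hunyuan3D's real interface.
    """
    x, dt = x_noise, 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)  # follow the learned velocity field
    return x

# Toy check: with a constant velocity pointing at a target, Euler steps land on it.
target = np.ones(4)
x0 = np.zeros(4)
out = sample_flow(lambda x, t: target - x0, x0)
print(np.round(out, 6))  # → [1. 1. 1. 1.]
```

Real systems replace the toy velocity with a conditioned transformer and operate in a latent shape representation; the integration loop is the common skeleton.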
Based on the new Blackmagic URSA Cine platform, the Blackmagic URSA Cine Immersive features a fixed, custom lens system and a sensor with 8160 x 7200 resolution per eye and pixel-level synchronization. It has an extremely wide 16 stops of dynamic range and shoots 90 fps stereoscopic into a Blackmagic RAW Immersive file. The new Blackmagic RAW Immersive file format is an enhanced version of Blackmagic RAW designed to keep immersive video simple to work with throughout the post-production workflow.
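A back-of-envelope calculation shows why a dedicated format matters here: the raw pixel throughput, before any Blackmagic RAW compression, is enormous.

```python
# Per-eye resolution, stereo pair, and frame rate from the spec above
w, h, eyes, fps = 8160, 7200, 2, 90

mp_per_eye = w * h / 1e6                      # megapixels per eye
gigapixels_per_s = w * h * eyes * fps / 1e9   # total pixel throughput

print(f"{mp_per_eye:.1f} MP per eye, {gigapixels_per_s:.2f} Gpx/s")
# ≈ 58.8 MP per eye, ≈ 10.58 gigapixels per second
```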
| Name | Resolution | Aspect Ratio |
| CGA | 320 x 200 | 16:10 |
| QVGA | 320 x 240 | 4:3 |
| VGA (SD, Standard Definition) | 640 x 480 | 4:3 |
| NTSC | 720 x 480 | 3:2 |
| WVGA (FWVGA) | 854 x 480 | 16:9 |
| WVGA | 800 x 480 | 5:3 |
| PAL | 768 x 576 | 4:3 |
| SVGA | 800 x 600 | 4:3 |
| XGA | 1024 x 768 | 4:3 |
| not named | 1152 x 768 | 3:2 |
| HD 720 (720P, High Definition) | 1280 x 720 | 16:9 |
| WXGA | 1280 x 800 | 16:10 |
| WXGA | 1280 x 768 | 5:3 |
| SXGA | 1280 x 1024 | 5:4 |
| not named (768P, HD, High Definition) | 1366 x 768 | ~16:9 |
| not named | 1440 x 960 | 3:2 |
| SXGA+ | 1400 x 1050 | 4:3 |
| WSXGA | 1680 x 1050 | 16:10 |
| UXGA (2MP) | 1600 x 1200 | 4:3 |
| HD 1080 (1080P, Full HD) | 1920 x 1080 | 16:9 |
| WUXGA | 1920 x 1200 | 16:10 |
| 2K | 2048 x (any) | (varies) |
| QWXGA | 2048 x 1152 | 16:9 |
| QXGA (3MP) | 2048 x 1536 | 4:3 |
| WQXGA | 2560 x 1600 | 16:10 |
| QHD (Quad HD) | 2560 x 1440 | 16:9 |
| QSXGA (5MP) | 2560 x 2048 | 5:4 |
| 4K UHD (4K, Ultra HD, Ultra-High Definition) | 3840 x 2160 | 16:9 |
| QUXGA+ | 3840 x 2400 | 16:10 |
| IMAX 3D | 4096 x 3072 | 4:3 |
| 8K UHD (8K, 8K Ultra HD, UHDTV) | 7680 x 4320 | 16:9 |
| 10K (10240 x 4320, 10K HD) | 10240 x (any) | (varies) |
| 16K (Quad UHD, 16K UHD, 8640P) | 15360 x 8640 | 16:9 |
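The aspect ratios in the table can be derived directly from the pixel dimensions. A small helper (note that 16:10 reduces to 8:5 in lowest terms):

```python
from math import gcd

def aspect_ratio(w: int, h: int) -> str:
    """Reduce a resolution to its lowest-terms aspect ratio."""
    g = gcd(w, h)
    return f"{w // g}:{h // g}"

print(aspect_ratio(1920, 1080))  # → 16:9
print(aspect_ratio(1280, 1024))  # → 5:4
print(aspect_ratio(2560, 1600))  # → 8:5, conventionally written 16:10
```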
https://www.discovery.com/science/mexapixels-in-human-eye
About 576 megapixels for the entire field of view.
Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be:
90 degrees x 60 arc-minutes/degree / 0.3 arc-minutes per pixel = 18,000 pixels per axis, so 18,000 x 18,000 = 324,000,000 pixels (324 megapixels).
At any one moment you do not actually perceive that many pixels; your eye moves around the scene to take in the detail you want. The human eye also covers a larger field of view, close to 180 degrees. Being conservative and using 120 degrees for the field of view, we would see:
120 x 60 / 0.3 = 24,000 pixels per axis, and 24,000 x 24,000 = 576,000,000 pixels, i.e. 576 megapixels.
Alternatively:
7 megapixels for the 2-degree focus arc… + 1 megapixel for the rest.
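The arithmetic above can be reproduced directly; the 0.3 arc-minute figure is the assumed visual acuity:

```python
def eye_megapixels(fov_deg: float, acuity_arcmin: float = 0.3) -> float:
    """Pixel count for a square field of view at a given angular acuity."""
    pixels_per_axis = fov_deg * 60 / acuity_arcmin  # degrees -> arc-minutes -> "pixels"
    return pixels_per_axis ** 2 / 1e6               # total pixels, in megapixels

print(round(eye_megapixels(90)))   # → 324
print(round(eye_megapixels(120)))  # → 576
```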
https://clarkvision.com/articles/eye-resolution.html
Details in the post
When collecting HDRIs, make sure the data supports basic metadata, such as:
In image processing, computer graphics, and photography, high dynamic range imaging (HDRI, or just HDR) is a set of techniques that allow a greater dynamic range of luminance between the lightest and darkest areas of an image than standard digital imaging or photographic methods. (Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction; it describes the amount of light that passes through or is emitted from a particular area within a given solid angle.) This wider dynamic range allows HDR images to represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight and the deepest shadows.
The two main sources of HDR imagery are computer renderings and the merging of multiple photographs, the latter of which are known as low dynamic range (LDR) or standard dynamic range (SDR) images. Tone mapping (look-up) techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.
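One classic global tone-mapping curve is Reinhard's operator, which compresses an unbounded HDR luminance range into [0, 1). A minimal sketch (illustrative of the idea, not the only technique):

```python
import numpy as np

def reinhard(luminance: np.ndarray) -> np.ndarray:
    """Global Reinhard tone mapping: L / (1 + L) maps [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])  # luminances spanning roughly 20 stops
ldr = reinhard(hdr)
print(np.round(ldr, 4))  # mid-grey (1.0) maps to exactly 0.5; highlights roll off toward 1
```

Production tone mappers add local adaptation and color handling, but the contrast-compressing shape is the same.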
In photography, dynamic range is measured in exposure value (EV) differences, or stops, between the brightest and darkest parts of the image that show detail. (Exposure value denotes all combinations of camera shutter speed and relative aperture that give the same exposure; the concept was developed in Germany in the 1950s.) An increase of one EV, or one stop, is a doubling of the amount of light.
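The equivalence of shutter/aperture combinations can be written as EV = log2(N² / t) for f-number N and shutter time t in seconds. A sketch:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t); all (N, t) pairs with the same EV give the same exposure."""
    return math.log2(f_number ** 2 / shutter_s)

# Opening the aperture one stop (f/8 -> f/8 / sqrt(2)) while halving the
# exposure time leaves EV unchanged:
ev_a = exposure_value(8, 1 / 125)
ev_b = exposure_value(8 / math.sqrt(2), 1 / 250)
print(round(ev_a, 3), round(ev_b, 3))  # both ≈ 12.966
```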
The human response to brightness is well approximated by Stevens' power law, which over a reasonable range is close to logarithmic, as described by the Weber-Fechner law; this is one reason logarithmic measures of light intensity are often used as well.
HDR is short for High Dynamic Range. It's a term used to describe an image which contains a greater exposure range than the "black" to "white" that 8 or 16-bit integer formats (JPEG, TIFF, PNG) can describe. Whereas these low dynamic range (LDR) images can hold perhaps 8 to 10 f-stops of image information, HDR images can describe beyond 30 stops and are stored in 32-bit formats.
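Stops are just base-2 logarithms of a luminance ratio, which makes the LDR-vs-HDR comparison easy to quantify (the luminance values below are illustrative):

```python
import math

def dynamic_range_stops(brightest: float, darkest: float) -> float:
    """Dynamic range in stops: each stop is a doubling of light."""
    return math.log2(brightest / darkest)

print(round(dynamic_range_stops(255, 1), 1))     # ≈ 8.0: an 8-bit-like range
print(round(dynamic_range_stops(1e5, 1e-4), 1))  # ≈ 29.9: a 32-bit float HDR scene
```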
For decades, LiDAR and 3D sensing systems have relied on mechanical mirrors and bulky optics to direct light and measure distance. But at CES 2025, Lumotive unveiled a breakthrough: a semiconductor-based programmable optic that removes the need for moving parts altogether.
LiDAR and 3D sensing systems work by sending out light and measuring when it returns, creating a precise depth map of the environment. However, traditional systems have relied on physically moving mirrors and lenses, which introduce several limitations.
To bring high-resolution depth sensing to wearables, smart devices, and autonomous systems, a new approach is needed.
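The depth measurement itself is simple physics: light travels out and back, so distance is half the round-trip time multiplied by the speed of light. A sketch, independent of any particular vendor's hardware:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Time-of-flight distance: the pulse covers the path out and back."""
    return C * round_trip_s / 2.0

print(round(tof_distance_m(10e-9), 3))  # a 10 ns echo ≈ 1.499 m
```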
Lumotive’s Light Control Metasurface (LCM) replaces mechanical mirrors with a semiconductor-based optical chip. This allows LiDAR and 3D sensing systems to steer light electronically, just as a processor manages data. The advantages are game-changing.
LCM technology works by controlling how light is directed using programmable metasurfaces. Unlike traditional optics that require physical movement, Lumotive’s approach enables light to be redirected with software-controlled precision.
At CES 2025, Lumotive showcased how their LCM-enabled sensor can scan a room in real time, creating an instant 3D point cloud. Unlike traditional LiDAR, which has a fixed scan pattern, this system can dynamically adjust to track people, objects, and even gestures on the fly.
This is a huge leap forward for AI-powered perception systems, allowing cameras and sensors to interpret their environment more intelligently than ever before.
Lumotive’s programmable optics have the potential to disrupt multiple industries.
Lumotive’s Light Control Metasurface represents a fundamental shift in how we think about optics and 3D sensing. By bringing programmability to light steering, it opens up new possibilities for faster, smarter, and more efficient depth-sensing technologies.
With traditional LiDAR now facing a serious challenge, the question is: Who will be the first to integrate programmable optics into their designs?
https://nielscautaerts.xyz/python-dependency-management-is-a-dumpster-fire.html
For many modern programming languages, the associated tooling has the lock-file based dependency management mechanism baked in. For a great example, consider Rust’s Cargo.
Not so with Python.
The default package manager for Python is pip, and the default instruction to install a package is to run pip install package. Unfortunately, this imperative approach to creating your environment is entirely divorced from the versioning of your code. You very quickly end up in a situation where you have hundreds of packages installed. You no longer know which packages you explicitly asked to install and which got installed as transitive dependencies. You no longer know which version of the code worked in which environment, and there is no way to roll back to an earlier version of your environment. Installing any new package could break your environment.
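A lightweight mitigation is to separate declared dependencies from a resolved lock. A minimal sketch of the declaration side (the package name and version pin here are illustrative):

```toml
# pyproject.toml - declare only what you directly depend on
[project]
name = "myapp"
version = "0.1.0"
dependencies = [
    "requests>=2.31",
]
```

Tools such as pip-tools, Poetry, or uv can then resolve this into a lock file that records every transitive version, restoring the Cargo-style reproducibility the article is asking for.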
…
https://github.com/comfyanonymous/ComfyUI
https://comfyui-wiki.com/en/install
https://stable-diffusion-art.com/comfyui
https://github.com/LykosAI/StabilityMatrix
https://github.com/ltdrdata/ComfyUI-Manager
https://www.thinkdiffusion.com
Videos, shortcuts and details in the post!