Radeon Instinct MI60 - Is double-precision float usable on GPU projects? Y/N
Usable in the best Servers & MiPower Blades (TM)
https://www.amd.com/system/files/documents/radeon-instinct-mi60-datasheet.pdf
PERFORMANCE (from the datasheet):
Compute Units: 64
Stream Processors: 4,096
Peak INT8: Up to 59.0 TOPS
Peak FP16: Up to 29.5 TFLOPS
Peak FP32: Up to 14.7 TFLOPS
Peak FP64: Up to 7.4 TFLOPS
Bus Interface: PCIe® Gen 3 and Gen 4 capable
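As a sanity check on those figures: peak throughput is just stream processors × 2 FLOPs per clock (one FMA) × engine clock, with FP64 at half the FP32 rate on this chip. A minimal C sketch of that arithmetic, assuming a ~1.8 GHz peak engine clock (the clock is not in the table above; it is the value that reproduces the quoted numbers):

#include <stdio.h>

/* Rough peak-throughput arithmetic for the MI60 datasheet numbers.
 * Assumptions: one FMA (2 FLOPs) per stream processor per clock,
 * FP64 at a 1:2 rate versus FP32, and a ~1.8 GHz peak engine clock. */
int main(void) {
    const double stream_processors = 4096.0;
    const double peak_clock_ghz    = 1.8;   /* assumed peak engine clock */
    const double flops_per_clock   = 2.0;   /* one FMA = 2 FLOPs */

    double fp32_tflops = stream_processors * flops_per_clock * peak_clock_ghz / 1000.0;
    double fp64_tflops = fp32_tflops / 2.0; /* MI60 runs FP64 at 1:2 of FP32 */

    printf("Peak FP32: %.1f TFLOPS\n", fp32_tflops); /* ~14.7 */
    printf("Peak FP64: %.1f TFLOPS\n", fp64_tflops); /* ~7.4  */
    return 0;
}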
https://www.amd.com/en/products/professional-graphics/instinct-mi60
https://www.amd.com/system/files/documents/lawrence-livermore-national-laboratory-case-study.pdf
https://www.amd.com/en/case-studies/lawrence-livermore-national-laboratory
https://www.amd.com/en/processors/server-tech-docs
Regards RS
https://science.n-helix.com
Yes, it's the datacenter Radeon VII.
It works very well, but most jobs only use partial or small amounts of FP64, so don't expect astronomical benefits here.
But the compute/datacenter drivers probably have much lower overhead, so I would expect you to blow the VII away, and that one already blows almost all others away for compute.
So what are the key differences?
Peter
Key differences between the Radeon VII and the Radeon Instinct MI50
The Radeon Instinct MI60 is slightly faster in all respects, as it has a few more compute units (64 in the MI60 vs. 60 in the MI50 and Radeon VII) and twice as much VRAM. The Radeon VII is the consumer/gamer version of the MI50.
But in practice all these differences do not mean much for BOINC calculations, as most of the software here is developed to use FP64 at a minimal level (or not at all, where possible) and is intended to work with standard ("gamer") driver capabilities. Also, I have not heard of any multi-GPU support - each GPU works independently.
So you can expect almost the same results from a Radeon VII as from a Radeon Instinct - maybe 5-20% slower, depending on the app/project.
The only exception will be projects that rely fully on double-precision calculations. At the moment I know of only one such project - MilkyWay@Home.
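For anyone wondering whether a given card and driver actually expose double precision to OpenCL applications (which is what an FP64-heavy project like MilkyWay@Home needs), here is a minimal C sketch against the standard OpenCL API; error handling is stripped for brevity and the file name is my own, not from any project app:

/* List OpenCL GPUs and whether they report FP64 support.
 * Build with e.g.:  gcc fp64_check.c -lOpenCL                 */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_fp_config fp64 = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_DOUBLE_FP_CONFIG,
                            sizeof(fp64), &fp64, NULL);

            /* Non-zero CL_DEVICE_DOUBLE_FP_CONFIG means FP64 kernels will run;
             * it says nothing about the FP64:FP32 rate (1:2 on MI50/MI60,
             * much lower on most consumer cards). */
            printf("%s: FP64 %s\n", name, fp64 ? "supported" : "not supported");
        }
    }
    return 0;
}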
What the previous poster said was a good explanation.
Lastly, just to add: the datacenter versions are the top-binned chips - the best-working chips that need the least voltage to run at the highest possible frequency.
The MI50 gets the lesser chips, which have small defects that result in inoperable shaders, need more voltage, and reach a lower frequency.
All the chips that remain can go in the trash... or into consumer cards, where voltage matters less because we have big coolers and nothing like datacenter rack density. Frequency is achieved by simply applying even more voltage. Add the inoperable shaders and a capped double-precision rate, and the VII is born.
The same goes for all brands, cards, and other silicon like CPUs.
1. Datacenter: 1,500-15,000 per GPU
2. Professional workstation: 300-6,000 per GPU
3. Gamers and other people: 75-1,200 per GPU
In the end it's the same GPU die, only in a more crippled and worse condition.
So there is lots of room for decent models to sift down to the "consumer market". After all, are we gamers? Do we qualify with only 5 years of Quake, N7, or Elite playtime? :p
But yes, Einstein@Home is a most exciting way to explore the stars.
Binning is the only way datacenters can actually afford to play with complex tools like the Epyc.
It's not so much that Epyc is complex - it's just 4 standard Ryzen dies in a package, nothing more.
But the datacenter environment is so much more punishing: rack density is going through the roof with all the GPU-accelerated nodes, and cooling is often problematic because of the low height of the rack enclosures and, again, the density. It needs to run 24/7 for years without failure and is far more mission-critical than consumer-oriented hardware.
But we at home can contribute a lot in this way, and at heart we are all nerds and geeks, because... well, we contribute and love these kinds of things.
I am very curious how much of a performance improvement compute-oriented drivers bring to the table.
At present I am very far from reaching my VII's maximum theoretical FLOPS in the real world.
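One way to quantify that gap is a dependent-FMA micro-benchmark: run a kernel whose FLOP count is known exactly, time it with OpenCL profiling events, and compare against the datasheet peak. A rough C sketch - my own guess at a test harness, not anything from an Einstein app (no error checking, first GPU only, loop counts chosen arbitrarily):

/* Measure achieved FP32 FMA throughput on the first OpenCL GPU. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void fma_loop(__global float *out) {            \n"
    "    float a = (float)get_global_id(0) * 1e-6f;           \n"
    "    float b = 1.000001f, c = 1e-4f;                      \n"
    "    for (int i = 0; i < 8192; ++i)                       \n"
    "        a = fma(a, b, c);       /* 2 FLOPs per fma */    \n"
    "    out[get_global_id(0)] = a;  /* keep the result live */\n"
    "}                                                        \n";

int main(void) {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device,
                                              CL_QUEUE_PROFILING_ENABLE, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "fma_loop", NULL);

    size_t global = 1 << 22;                 /* ~4 million work items */
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                global * sizeof(float), NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &out);

    cl_event ev;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, &ev);
    clWaitForEvents(1, &ev);

    cl_ulong t0, t1;                          /* profiling times are in ns */
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(t0), &t0, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,   sizeof(t1), &t1, NULL);

    double flops = (double)global * 8192.0 * 2.0; /* items * iters * 2 FLOPs/fma */
    printf("Achieved: %.2f TFLOPS\n", flops / ((t1 - t0) * 1e-9) / 1e12);
    return 0;
}

Real project kernels are mostly memory- and latency-bound rather than FMA-bound, so landing well below a number like this (let alone the datasheet peak) is normal.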
https://wccftech.com/amd-ryzen-3000-cpus-7nm-zen-2-first-epyc-rome-q3-2019-launch/
https://wccftech.com/playstation-5-special-sauce/
A recent rumor revealed that the console's devkit is running a nearly 13-teraflop GPU. What we know for sure is that the console will be powered by a CPU based on the third generation of AMD's Ryzen line and by a GPU that supports ray tracing.
https://wccftech.com/amd-confirms-navi-gpu-launching-in-q3-2019/
Open Source Driver for Vulkan : Debian/Ubuntu/Linux
https://github.com/GPUOpen-Drivers/AMDVLK
Run this after downloading the file: https://is.gd/Install_gpl_amd_drivers_sh
sudo chmod 774 Install-gpl-amd-drivers.sh
sudo ./Install-gpl-amd-drivers.sh
****
GL to Vulkan : gfx-portability : Prototype library implementing Vulkan Portability Initiative using gfx-hal. See gfx-rs meta issue for backend limitations and further details.
https://github.com/gfx-rs/portability
****
OpenGLES/OpenCL/OpenGL/Vulkan API : Mac:Windows:Linux:Android
https://developer.arm.com/tools-and-software/graphics-and-gaming/graphics-development-tools/opengl-es-emulator
Vulkan for Mac : MoltenVK
https://github.com/KhronosGroup/MoltenVK/releases/tag/v1.0.35
Usable for coding:
https://github.com/KhronosGroup/OpenCL-ICD-Loader
https://github.com/KhronosGroup/OpenCL-CLHPP
****
Texture & polygon optimiser & compressor : may be useful for Einstein development
https://github.com/GPUOpen-Tools/Compressonator/releases
https://github.com/KhronosGroup/glTF-Compressonator
Useful for phone development:
WebCLGL : Libraries & JS
https://github.com/stormcolor/webclgl
https://github.com/stormcolor/webclgl/blob/master/dist/webclgl/WebCLGL.min.js
WebCLGL uses the WebGL2 specification to interpret code.
WebGL is used like OpenCL for GPGPU calculations, using the traditional render-to-texture technique.
****
WebGL Compute
https://www.khronos.org/registry/webgl/specs/latest/2.0-compute/#diff-with-gles31
https://www.khronos.org/assets/uploads/developers/library/2017-webgl-webinar/Khronos-Webinar-WebGL-20-is-here_What-you-need-to-know_Apr17.pdf