HPC_Guru, 3 months ago to hpc
64K Kernel Page Size Performance Benefits For #HPC Refreshed
This round includes NVIDIA's GH200, along with the AMD & Intel CPUs
Linux 6.8 kernel performance with a 64K page size improved on average by about 15%
https://www.phoronix.com/review/aarch64-64k-kernel-perf
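The base page size the benchmarks compare is a kernel build-time choice, but it can be checked at runtime. A minimal sketch (Python on Linux; `SC_PAGE_SIZE` is standard POSIX):

```python
import os

# Query the kernel's base page size via POSIX sysconf.
# 4096 (4K) is the x86-64 default; 65536 (64K) on AArch64
# kernels built with CONFIG_ARM64_64K_PAGES.
page_size = os.sysconf("SC_PAGE_SIZE")
print(f"base page size: {page_size // 1024}K")

# Sanity check: page sizes are always powers of two.
assert page_size > 0 and page_size & (page_size - 1) == 0
```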
HPC_Guru, 3 months ago
@feld My understanding is that to get a larger page size than 64K in Linux, one would have to enable huge pages
Huge pages can use larger memory blocks (e.g., 2MB or 1GB)
https://blog.netdata.cloud/understanding-huge-pages/
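On Linux, the configured huge page size and pool counts are exposed in `/proc/meminfo`. A small sketch, assuming a Linux system with hugetlb support compiled in:

```python
def hugepage_info(meminfo_path="/proc/meminfo"):
    """Parse huge-page fields from /proc/meminfo (Linux only)."""
    fields = {}
    with open(meminfo_path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key.startswith("Huge"):
                # Lines look like "HugePages_Total:       0"
                # or "Hugepagesize:       2048 kB".
                fields[key] = rest.split()[0]
    return fields

info = hugepage_info()
# Typical default: Hugepagesize is 2048 kB (2MB); 1GB pages
# need gigantic-page support and explicit reservation.
print(info.get("Hugepagesize", "n/a"), "kB huge pages")
```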
HPC_Guru, 7 months ago to ai
Nvidia is prepping three new #GPUs for #AI and #HPC applications tailored for China and to comply with U.S. export requirements
o HGX H20
o L20 PCIe
o L2 PCIe
https://www.tomshardware.com/tech-industry/nvidia-readies-new-ai-and-hpc-gpus-for-china-market-report
#GPU
HPC_Guru, 7 months ago
@johnefrancis US export controls. My guess is they fear that these countries might re-export to China.
HPC_Guru, 7 months ago to ai
The shortages of #AI skills and #GPU compute engines are the best thing that could have happened for companies like Amazon and its #AWS #cloud
@awscloud gears up to profit mightily from the #GenAI boom
https://www.nextplatform.com/2023/10/31/amazon-gears-up-to-profit-mightily-from-the-generative-ai-boom/
#HPC #AI
HPC_Guru, 7 months ago
TPM makes a good point about one big difference between the #DotCom boom and the #GenAI boom:
DotCom: Modest compute requirements - x86 server + Linux
GenAI: High cost and short supply of GPUs, plus the low-latency, high-bandwidth networking required to create and use LLMs
#HPC #AI
HPC_Guru, 8 months ago to hpc
Need to move from 1% efficient to 30% efficient in 5-7 years
Gary Grider at LosAlamosNatLab updates the #HPC User Forum on the challenges of procuring LANL #supercomputers for complex workflows and not just for peak ops or HPL
https://hpcuserforum.com/wp-content/uploads/2023/09/Gary-Grider-LANL-Platform-Planning-and-Update.pdf
HPC_Guru, 8 months ago
#HPC systems are becoming less balanced
o <1% of the FLOPS are useful
o Memory access and branching efficiency are more important
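The "<1% of the FLOPS are useful" point falls out of a roofline-style balance estimate: when a kernel is memory-bound, achievable FLOPS are capped by bandwidth times arithmetic intensity, not by peak compute. A sketch with illustrative numbers (assumed for the example, not figures from the talk):

```python
# Roofline-style estimate of why memory-bound codes touch only
# a tiny fraction of peak FLOPS. Machine numbers are assumed.
peak_flops = 100e12        # 100 TFLOP/s peak compute
mem_bandwidth = 3e12       # 3 TB/s memory bandwidth

# STREAM-triad-like kernel: a[i] = b[i] + s*c[i]
# -> 2 flops per 24 bytes moved (three 8-byte accesses).
arithmetic_intensity = 2 / 24   # flops per byte

achievable = min(peak_flops, mem_bandwidth * arithmetic_intensity)
fraction = achievable / peak_flops
print(f"achievable: {achievable/1e12:.2f} TFLOP/s "
      f"({fraction:.2%} of peak)")
# -> achievable: 0.25 TFLOP/s (0.25% of peak)
```

With these numbers the triad kernel reaches only 0.25% of peak, which is why memory access (and branch) efficiency, not peak ops or HPL, dominates real workflow performance.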