Since the release of PhysX SDK 4.0 in December 2018, NVIDIA PhysX has been available as open source under the BSD-3 license—with one key exception: the GPU simulation kernel source code was not included.
I mean, does it work worse? UE4/Havok and Unigine all use CPU PhysX. And every other engine I know of uses a custom particle physics implementation and seems far better at it than GPU PhysX ever was.
On the GPU I remember PhysX being super buggy, since the GPU calculations were very low precision, and that was if you had an Nvidia card. It made AMD cards borderline unplayable in many games that were doing extensive particle physics, for no other reason than to punish AMD in benchmarks.
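To make the precision point concrete, here is a minimal, illustrative sketch (not PhysX code; the GPU kernel internals weren't public at the time) of how single-precision rounding compounds in a long-running naive integration. The integrator, step rate, and duration are arbitrary choices for the demo:

```cpp
#include <cstdio>

// Naive explicit-Euler free fall, integrated at 240 Hz for ten simulated
// minutes, in both single and double precision. Each step's rounding error
// in float is tiny, but it accumulates across ~144,000 steps.
template <typename T>
T fall_distance() {
    T pos = 0, vel = 0;
    const T g  = static_cast<T>(9.81);
    const T dt = static_cast<T>(1.0 / 240.0);
    for (int i = 0; i < 240 * 600; ++i) {
        vel += g * dt;   // accumulate velocity
        pos += vel * dt; // accumulate position
    }
    return pos;
}

int main() {
    double f = fall_distance<float>();
    double d = fall_distance<double>();
    printf("float : %.3f m\ndouble: %.3f m\ndrift : %.3f m\n", f, d, d - f);
}
```

On a typical build the float result visibly diverges from the double result, which is the kind of compounding error that shows up as jitter or tunneling in long-running particle simulations.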
Not trying to be rude, but that's a question of how the engine uses the CPU vs. GPU implementation, not an apples-to-apples comparison.
Comparing modern games with CPU particle physics to the heyday of GPU PhysX, there is no contest. CPU physics (PhysX included) is more accurate, less buggy, and generally has negligible performance impact.
Not in my experience. On the CPU it only uses one core for me.
For AMD it was executed on one core of the CPU, so the problems you're talking about with AMD cards are exactly what I mean.
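For context on the single-core claim: in the open-source SDK, the CPU worker-thread count is something the integrating engine chooses, not a fixed property of PhysX. A minimal sketch against the public PhysX 4/5 scene-creation API (the thread count of 4 is an arbitrary example):

```cpp
// Sketch only: PhysX scene setup showing that the CPU thread count is
// chosen by the integrating engine, not hardcoded by the SDK.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    // Worker-thread pool for the simulation. Passing 0 runs tasks on the
    // calling thread, i.e. the effectively single-core behavior described above.
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Step one 60 Hz frame.
    scene->simulate(1.0f / 60.0f);
    scene->fetchResults(true);

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
```

An engine that never sets up a multi-threaded dispatcher ends up running the simulation on a single core, which matches the behavior described in the comments above.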