An approach I use to avoid computing particle positions on the CPU and re-uploading buffers to the GPU every frame is to procedurally generate everything in the GPU's shaders instead.
If you create a buffer of random numbers, plus one more random number for the particular particle-effect instance, you can use these numbers in your shader to decide, for any given point in time, where a particle would be and what the rest of its state is.
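To sketch the setup side in plain WebGL (the buffer layout, helper names and uniform locations here are illustrative, not the barebones.js API): the seed buffer is built once at load time, and each effect instance only carries one extra random number and its spawn time.

// One-time setup: a small VBO of per-particle random seeds, shared by every
// instance of this effect. (gl, program setup and attribute pointers omitted.)
const NUM_PARTICLES = 200;
const seeds = new Float32Array(NUM_PARTICLES * 3);   // e.g. lifetime, x and y-speed seeds
for (let i = 0; i < seeds.length; i++) seeds[i] = Math.random();

const seedBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, seedBuffer);
gl.bufferData(gl.ARRAY_BUFFER, seeds, gl.STATIC_DRAW);   // uploaded once, never updated

// Per effect instance: just one extra random number, set as a uniform each draw.
function drawParticleSystem(instance, timeNow) {
  gl.uniform1f(randseedLocation, instance.randseed);       // chosen when the instance spawned
  gl.uniform1f(uTimeLocation, timeNow - instance.spawnTime);
  gl.drawArrays(gl.POINTS, 0, NUM_PARTICLES);
}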
And although you are re-using this exact same little VBO for each and every particle system, and many may be visible on-screen at the same time, no two particle systems look the same!
I've used it for 2D flame effects and even for particles that follow moving objects in 3D. There are obvious limitations - particles that react with other things in the scene, e.g. bouncing off walls, cannot easily be modelled this way.
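For the follow-a-moving-object case, one possible sketch is to keep the motion fully procedural (the same trick as the flame shader below) and just pass the emitter's current position in as a uniform; uEmitterPos, aSeed and the offset formula are illustrative, not taken from the engine:

attribute float aSeed;          // per-particle random seed, as in the flame shader
uniform mat4 mvp;
uniform float uTime;
uniform float randseed;         // per-instance seed
uniform vec3 uEmitterPos;       // illustrative: the followed object's current position

float rand(float f) { return fract(456.789*sin(789.123*f*randseed)*(1.+f)); }

void main(void) {
    // procedural offset around the emitter: random sideways spread, rising over time
    vec3 offset = vec3(rand(aSeed) - 0.5,
                       mod(uTime * (0.5 + rand(aSeed + 1.0)), 1.0),
                       rand(aSeed + 2.0) - 0.5);
    gl_Position = mvp * vec4(uEmitterPos + offset, 1.0);
    gl_PointSize = 8.0;
}

Note that this makes every live particle move rigidly with the object; a genuine trail would need the emitter position at each particle's birth, e.g. from a short history of positions.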
Here's some example code for a flame-effect vertex shader from my barebones.js engine:
attribute float aLifetime;    // per-particle random seeds, filled once at creation
attribute float aXPos;
attribute float aYSpeed;
attribute vec2 aColor;

uniform mat4 mvp;
uniform float uTime;
uniform float uPointSize;
uniform float randseed;       // per-instance seed, so no two systems look the same

varying float vLifetime;
varying vec2 color;

// cheap hash turning a seed into a pseudo-random value in [0,1)
float rand(float f) {
    return fract(456.789*sin(789.123*f*randseed)*(1.+f));
}

void main(void) {
    // each particle lives for between 1 and 3 time units, derived from its seed
    float lifetime = mix(0.0,2.0,rand(aLifetime))+1.0;
    vLifetime = mod(uTime,lifetime);
    float ti = 1. - vLifetime/lifetime;
    // x spread narrows as the particle ages; y rises from the base at a random speed
    gl_Position = mvp * vec4(mix(-1.0,1.0,rand(aXPos))*ti,
                             mix(0.0,0.7,rand(aYSpeed))*vLifetime - 1.,
                             0., 1.);
    // reuse vLifetime as a fade factor: 0 at birth and death, peaking at mid-life
    vLifetime = 4.*ti*(1. - ti);
    color = aColor;
    gl_PointSize = uPointSize;
}

For 3D effects you also want to scale the point size with distance from the camera.
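A minimal sketch of one way to do that, assuming a standard perspective projection (so gl_Position.w is the eye-space depth); aPosition and uRefDist are illustrative names rather than part of the shader above:

attribute vec3 aPosition;      // illustrative: particle position in world space
uniform mat4 mvp;
uniform float uPointSize;      // desired size in pixels at the reference distance
uniform float uRefDist;        // illustrative: distance at which a sprite is uPointSize pixels

void main(void) {
    gl_Position = mvp * vec4(aPosition, 1.0);
    // with a perspective projection, gl_Position.w equals the eye-space depth,
    // so dividing by it shrinks distant sprites and enlarges nearby ones
    gl_PointSize = uPointSize * uRefDist / max(gl_Position.w, 0.0001);
}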