If you do not have a server of your own on the remote side, there are solutions based on a multihomed default route and policy-based routing. Each individual connection then still uses only one uplink, but different connections can use different uplinks, which improves throughput when several connections are active in parallel.
A multihomed default route is simple:
ip route replace default nexthop dev ppp0 weight 1 nexthop dev ppp1 weight 1
but on its own it will most likely not suffice, since replies to packets that came in over one link may go out over the other link, and will then usually not be recognised by the remote end.
This is where policy-based routing comes into play; there are many guides on the internet covering it.
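For illustration, a minimal policy-routing sketch: assuming ppp0 has the local address 10.0.0.2 and ppp1 has 10.0.1.2 (both addresses hypothetical), source-based rules force replies back out through the link their address belongs to:

# per-uplink routing tables (table numbers are arbitrary)
ip route add default dev ppp0 table 100
ip route add default dev ppp1 table 101
# select the table by source address, so replies leave the way they came in
ip rule add from 10.0.0.2 table 100
ip rule add from 10.0.1.2 table 101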
If you have your own server on the remote side, you can set something up "on top" which recombines both connections and gives nearly double bandwidth even for single connections. I currently have success with tunneling multilink ppp over ssh, although TCP over TCP is not so good (one could use netcat or socat instead of ssh as the transport). For that I have configured my server so that I can launch pppd with sudo without a password (see the sudoers sketch after the command below), and on my client I run something like:
pppd nodetach local debug noauth multilink eap-timeout 90 \
    pty "ssh -b 10.220.105.203 -p 333 <user>@<server> -t -e none sudo pppd noauth multilink eap-timeout 90" \
    10.12.13.2:10.12.13.1
(ssh will still ask me for a password this way.)
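The passwordless sudo on the server can be arranged with a sudoers entry; a minimal sketch, assuming pppd is installed at /usr/sbin/pppd and <user> is the account used for the ssh login:

# /etc/sudoers.d/pppd on the server (edit with visudo -f)
<user> ALL=(root) NOPASSWD: /usr/sbin/pppd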
Other solutions I tried but could not quite get to work yet include multilink ppp over vtun (the latter segfaults on my client) and vtrunkd (which failed in ways I could not pin down).
And there might also be ways to use the bonding or teaming driver with tap interfaces; a rough sketch of the idea follows.
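A minimal sketch under these assumptions: two tap interfaces, each carried over one uplink by some layer-2 tunnel (for example one "ssh -o Tunnel=ethernet -w N:N" session per uplink, or socat), are enslaved to a round-robin bond; all interface names and the address are hypothetical:

# create the tap interfaces the tunnels will attach to
ip tuntap add dev tap0 mode tap
ip tuntap add dev tap1 mode tap
# round-robin bonding spreads packets across both slaves
ip link add bond0 type bond mode balance-rr
ip link set tap0 master bond0
ip link set tap1 master bond0
ip link set tap0 up
ip link set tap1 up
ip link set bond0 up
ip addr add 10.99.0.2/24 dev bond0

The same setup would have to be mirrored on the server, and since balance-rr can reorder packets, TCP performance over such a bond varies.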