While Spark supports a limited predicate pushdown over JDBC, all other operations, such as limit, grouping, and aggregations, are performed internally. Unfortunately, this means that take(4) will fetch the complete table first and then apply the limit.
If you want to push the limit down to the database, you'll have to do it statically, by passing a subquery as the dbtable parameter:
(sqlContext.read.format('jdbc')
    .options(url='xxxx', dbtable='(SELECT * FROM xxx LIMIT 4) tmp', ....))
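If you need to vary the limit, a small helper that builds the subquery string keeps this manageable. A minimal sketch (limited_dbtable is a hypothetical helper, and the URL, table name, and alias are placeholder assumptions):

```python
def limited_dbtable(table: str, n: int) -> str:
    """Wrap a table in a derived-table subquery so the LIMIT runs
    database-side when the result is passed as Spark's dbtable option.
    `table` and the `tmp` alias are illustrative placeholders."""
    return "(SELECT * FROM {} LIMIT {}) tmp".format(table, n)

# How it would be wired into the JDBC reader (url/credentials are
# placeholders; not executed here since it needs a live database):
# df = (sqlContext.read.format('jdbc')
#       .options(url='jdbc:postgresql://host:5432/db',
#                dbtable=limited_dbtable('xxx', 4))
#       .load())
```

Note that the exact subquery syntax is database-specific (for example, SQL Server uses TOP instead of LIMIT), so adjust the string for your dialect.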