Using a 50:50 split is generally not recommended; people usually keep more data for training and less for testing/validation.
The more training data you have, the better the model captures the different patterns in your data; the more test data you have, the more reliable your evaluation of the trained model. So it's a trade-off between the two, and ultimately you have to decide which you care about more.
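In practice, something like an 80:20 split is a common starting point. A minimal sketch with scikit-learn, assuming your features and labels are already in `X` and `y`:

```python
from sklearn.model_selection import train_test_split

# Keep 80% for training, 20% for testing -- a common default choice
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```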
Since you mention you have a large dataset, choosing 50:50 wouldn't be as much of a problem as it would be with a small dataset. But your model would still miss some data patterns, which in turn will make it less generalizable, so keep that in mind!
A possible way around this trade-off is cross-validation (preferably nested cross-validation). That way, even though each fold trains on less data, all of your data ends up being used for both training and evaluation.
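For reference, here's a minimal sketch of nested cross-validation with scikit-learn (the SVM and the parameter grid are just placeholders, assuming a classification task): the inner loop tunes hyperparameters, and the outer loop gives a less biased estimate of performance.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Toy data just for the example -- swap in your own X, y
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)

param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}

# Inner loop: hyperparameter tuning
clf = GridSearchCV(estimator=SVC(), param_grid=param_grid, cv=inner_cv)

# Outer loop: performance estimate on data the tuning never saw
scores = cross_val_score(clf, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```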
Cheers!