You can accomplish this with DeleteDuplicatesBy: first pair up your two input lists into a matrix, then delete the rows where the last element (the element that came from myNewList) is a duplicate. Then you transpose back and assign the sublists of the reduced matrix to the new lists you want.
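(The OP's original lists aren't reproduced here; for concreteness, the small examples below assume this hypothetical pair, chosen to be consistent with the outputs shown.)

myList = {1, 2, 3}; (* hypothetical inputs, consistent with the outputs below *)
myNewList = {7, 7, 2};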
{myListDuplicatesDeleted, myNewListReduced} =
 Transpose@DeleteDuplicatesBy[Transpose@{myList, myNewList}, Last]
(* {{1, 3}, {7, 2}} *)
Or, slightly shorter and returning only the myListDuplicatesDeleted list,
First /@ DeleteDuplicatesBy[Transpose@{myList, myNewList}, Last]
And why not have yet another option: we can use GatherBy. I'm writing this in two equivalent ways, to add to the confusion - the first uses Part, which I always prefer, and the second uses First, which people use a lot. Note that GatherBy keeps the pairs in each group in their original order, so taking the first pair of each group keeps the first occurrence.
GatherBy[Transpose@{myList, myNewList}, Last][[All, 1, 1]]
Map[First, GatherBy[Transpose@{myList, myNewList}, Last], 2]
Comparing the available methods,
First /@ DeleteDuplicatesBy[Transpose@{myList, myNewList}, Last] // RepeatedTiming
Map[First, GatherBy[Transpose@{myList, myNewList}, Last], 2] // RepeatedTiming
(* method by garej *)
Lookup[Thread[myNewList -> myList], DeleteDuplicates@myNewList] // RepeatedTiming
(* method by Kuba *)
Module[{f}, f[x_] := (f[x] = 0; 1); Pick[myList, f /@ myNewList, 1]] // RepeatedTiming
(* method by Hubble07 *)
Module[{dups = DeleteDuplicates[myNewList], pos},
  pos = Flatten[First@Position[myNewList, dups[[#]]] & /@ Range[Length[dups]]];
  myList[[pos]]] // RepeatedTiming

(* {0.0000131, {1, 3}} *)
(* {5.7*10^-6, {1, 3}} *)
(* {2.6*10^-6, {1, 3}} *)
(* {0.00001488, {1, 3}} *)
(* {0.0000192, {1, 3}} *)
It appears that garej's method is the fastest (I still don't really understand it, but I have a blind spot for associations). But for a list this size they are all pretty equivalent. What about a much larger list? Let's look at the timing of the GatherBy and Lookup methods for a couple of cases. First, a large number of elements (100,000) in both lists, but only about 200 unique values in myNewList. In this case, the Lookup method is still faster (by about 30%):
myList = Range[100000];
myNewList = RandomInteger[200, 100000];
Map[First, GatherBy[Transpose@{myList, myNewList}, Last], 2]; // RepeatedTiming
Lookup[Thread[myNewList -> myList], DeleteDuplicates@myNewList]; // RepeatedTiming
(* {0.0537, Null} *)
(* {0.038, Null} *)
But what if we have just as many elements, but far more unique values in myNewList? In this case the cost of using Lookup comes into play, presumably because Lookup on a list of rules has to scan the rules for each key it looks up:
myList = Range[100000];
myNewList = RandomInteger[50000, 100000];
Map[First, GatherBy[Transpose@{myList, myNewList}, Last], 2]; // RepeatedTiming
Lookup[Thread[myNewList -> myList], DeleteDuplicates@myNewList]; // RepeatedTiming
(* {0.11, Null} *)
(* {12.6, Null} *)
For this case, with a large list and many unique elements, Kuba's Pick method scales nearly as well as the GatherBy method, while Hubble07's straightforward Position-based method scales poorly; see the sketch below.
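For reference, this is how the Pick method can be timed on the same large inputs (a sketch only - no numbers are quoted here since timings are machine-dependent; Hubble07's method can be timed the same way but is very slow at this size):

(* Kuba's Pick method on the large inputs defined above;
   f returns 1 the first time it sees a value and 0 afterwards *)
Module[{f},
  f[x_] := (f[x] = 0; 1);
  Pick[myList, f /@ myNewList, 1]]; // RepeatedTiming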
Edit: The fastest method for doing this on a large list is garej's AssociationThread method, which, when applied to the large list above, gives
Values@AssociationThread[myNewList -> myList]; // RepeatedTiming
(* {0.031, Null} *)
This method fulfills the OP's requirements, but goes about it in an unintuitive way: when a key is duplicated, AssociationThread keeps the value from its last occurrence rather than its first. Consider these two lists,
myList = Range[10]
myNewList = Range[5, 7]~Join~Range[7]
(* {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} *)
(* {5, 6, 7, 1, 2, 3, 4, 5, 6, 7} *)
and the two different reduced lists that can be made from them:
Map[First, GatherBy[Transpose@{myList, myNewList}, Last], 2]
Values@AssociationThread[myNewList -> myList]
(* {1, 2, 3, 4, 5, 6, 7} *)
(* {8, 9, 10, 4, 5, 6, 7} *)
So for the value 5, which was duplicated in myNewList, the AssociationThread method keeps the 8 from myList instead of the 1, which all the other methods pick. This is fine and fits the requirements. But while AssociationThread picks acceptable elements to keep, it also changes their order relative to myList, which may or may not be acceptable.
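If you want the speed of AssociationThread but the same elements and order that the other methods produce, one possibility (a sketch, not benchmarked here) is to reverse both lists, so that the "last" occurrence AssociationThread keeps is actually the first, and then use Lookup to restore the first-appearance order:

(* sketch: reversing both lists makes AssociationThread keep each key's
   first occurrence; Lookup then restores first-appearance order *)
Lookup[AssociationThread[Reverse@myNewList -> Reverse@myList],
 DeleteDuplicates@myNewList]
(* {1, 2, 3, 4, 5, 6, 7} *)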