Amazed that this old question lacks the obvious simple Awk solution:
find . -type f -exec awk '/string1/ && /string2/ { print; r=1 } END { exit 1-r }' {} \;
The trickery with the r variable is just to emulate the exit code from grep (zero means found, one means not; if you don't care, you can take that out).
For efficiency, maybe switch from -exec ... {} \; to -exec ... {} +, though then you might want to refactor the Awk script a bit (either throw out the exit code, or change it so the exit code indicates something like "no files matched" vs "only some files matched" vs "all files matched").
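Here is a sketch of how the {} + form could look, with FILENAME added to the output since each awk invocation now sees several files (note that when find splits the file list across multiple awk invocations, the meaning of the combined exit code gets fuzzier):

find . -type f -exec awk '/string1/ && /string2/ { r=1; print FILENAME ":" $0 } END { exit 1-r }' {} +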
The commands above look for files where both strings occur on the same line. Finding them anywhere in the file, on any lines, is an easy change:
awk '/string1/ { s1=1 } /string2/ { s2=1 } s1 && s2 { print FILENAME; exit } END { exit(1 - (s1 && s2)) }' file
This just prints the name of the file, and assumes that you have a single input file. For processing multiple files, refactor slightly, to reset the values of s1 and s2 when visiting a new file:
awk 'FNR == 1 { s1 = s2 = 0 } /string1/ { s1 = 1 } /string2/ { s2 = 1 } s1 && s2 { r=1; print FILENAME; nextfile } END { exit 1-r }' file1 file2 file3 ...
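As a usage sketch (the *.txt glob here is just a stand-in for your file list), the grep-style exit code lets you drop this straight into a shell conditional:

awk 'FNR == 1 { s1 = s2 = 0 } /string1/ { s1 = 1 } /string2/ { s2 = 1 } s1 && s2 { r=1; print FILENAME; nextfile } END { exit 1-r }' *.txt && echo "at least one file contains both strings"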
Some ancient Awk versions might not support nextfile, though it is now in POSIX.
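If you need to cater to such an awk, here is a minimal sketch of the same logic without nextfile (it simply keeps reading the rest of each file after a match, printing each file name at most once):

awk 'FNR == 1 { s1 = s2 = p = 0 } /string1/ { s1 = 1 } /string2/ { s2 = 1 } s1 && s2 && !p { p = r = 1; print FILENAME } END { exit 1-r }' file1 file2 file3 ...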
egrep -l "string1|string2"gives all the files which containstring1ORstring2, in case a parameter exist in order to makeegrep -l "string1 <parameter> string2"give the files which containstring1ANDstring2, your question would be solved. (I don't know if such a parameter exists, though)&just like|corresponds to intersection, but no common regex tools implement this. The easy fix isawk '/pattern1/ && /pattern2/'so there is already a simple way to do exactly this, albeit not withgrep.