You say that you "noticed that industry participants generally responded more negatively compared to the other two groups". And you seem concerned that this introduces a "baseline difference". But the baseline is exactly what you are measuring, and that difference is exactly what you want to find (if it exists).
If you had measured these perceived career barriers before, and then after some intervention, you would be concerned about a prior (baseline) bias in one group, if your goal was to measure the effect of the intervention. But you gave no indication of such an intervention. So all your data can measure is the baseline.
So the fact that industry participants are more negative in their views is exactly what you should expect to find. And when you think about it, it is intuitively reasonable. Students have no idea (yet) what a "real job" will look like, and thus are probably a bit too optimistic. Career faculty will never have any such idea (academia is not at all like corporate/industry work). And industry participants have some personal experience with this, and probably have a bit more of a sober perspective. Believe me, as a 40+ year industry participant, industry/corporate work can often be a real pain in the ... If it were not for the $$, many people would be in a different profession, or would switch employers more often.
Having said this, what can you do? You mention comparing the means. But your scores are Likert-type (a 7-point scale, apparently). The issue here is that Likert scales are ordinal scales, not interval or ratio scales. So computing means or standard deviations is, strictly speaking, not permissible (e.g. see here on CV, or here, or here, and many more places). You would certainly not be the first to use tests of the means on Likert-scale data, and you will -sadly- not be the last, but it is, at best, debatable.
You mention tests of the ranks, which is a very reasonable idea, since ranks can be used with ordinal data. There are 2 "classical" such tests: the Mann-Whitney U test (MWUt), for comparing 2 samples, and the Kruskal-Wallis test (KWt), for 2 or more samples.
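As a quick sketch of what running these two tests looks like in practice, here is some Python using `scipy.stats`; the group names, sizes, and the simulated 7-point responses below are entirely made up for illustration:

```python
# Sketch: rank-based comparison of hypothetical 7-point Likert responses.
# All data here is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
students = rng.integers(3, 8, size=40)  # made-up responses on a 1..7 scale
industry = rng.integers(1, 6, size=45)

# Mann-Whitney U test: compares exactly 2 samples.
u_stat, u_p = stats.mannwhitneyu(students, industry, alternative="two-sided")
print(f"MWUt: U = {u_stat:.1f}, p = {u_p:.4g}")

# Kruskal-Wallis test: an omnibus test for 2 or more samples.
h_stat, h_p = stats.kruskal(students, industry)
print(f"KWt:  H = {h_stat:.2f}, p = {h_p:.4g}")
```

Note that both functions work on the ranks of the pooled data, so ties (which you will have a lot of, with only 7 possible values) are handled via midranks automatically.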
Now, the first thing to note is that neither the MWUt nor the KWt is a test of means, or of medians (as they are unfortunately way too often described), but a test of *stochastic equality*. I quote from wiki's page on the MWUt:
> The probability of an observation from population X exceeding an observation from population Y is different (larger, or smaller) than the probability of an observation from Y exceeding an observation from X; i.e., $P(X > Y) \neq P(Y > X)$, or $P(X > Y) + 0.5 \cdot P(X = Y) \neq 0.5$.
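To make that quantity concrete, here is a minimal sketch (with toy, made-up responses) of how the empirical estimate of $P(X > Y) + 0.5 \cdot P(X = Y)$ can be computed directly from two samples, by comparing every pair of observations:

```python
import numpy as np

def stochastic_superiority(x, y):
    """Empirical estimate of P(X > Y) + 0.5 * P(X = Y),
    computed over all pairs (x_i, y_j)."""
    x = np.asarray(x)[:, None]   # shape (n, 1)
    y = np.asarray(y)[None, :]   # shape (1, m)
    return (x > y).mean() + 0.5 * (x == y).mean()

# Toy Likert-style data (made up):
students = [5, 6, 4, 7, 5, 6]
industry = [3, 4, 2, 5, 3, 4]
print(stochastic_superiority(students, industry))  # 33/36 ≈ 0.917
```

A value near 0.5 means neither group tends to score higher; a value near 1 (as here, for this toy data) means an observation from the first group almost always exceeds one from the second.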
As long as you present your results that way (e.g. "the students population was found to be significantly stochastically superior to the industry participant population, but not to the faculty population"), these tests will compare your data among the 3 groups, and are appropriate tests for ordinal data (because they only compare ranks).
Now, with respect to exactly which test to use: you have 3 groups, so the KWt would be a natural choice. But it is an omnibus test, so all it will tell you is that at least 1 sample is stochastically superior to another; it will not tell you which ones they are, and you will need to run post-hoc 2-sample tests to find that out. Moreover, the KWt suffers from the Behrens-Fisher problem (inflated Type I errors, or seriously reduced power, when variances/sample sizes are not equal). Last, stochastic superiority is a non-transitive property (i.e. you can observe "rock/paper/scissors" situations where sample A is stochastically superior to B, which is superior to C, but then C is superior to A). In such cases the KWt will lose almost all power (there is an excellent paper on this situation). So I am not a big fan of the KWt; too many caveats.
Instead, in your case (only 3 comparisons, maybe in fact just 2? Students do not seem different from faculty...), I would use 3 (or 2) MWUt, or in fact Brunner-Munzel tests (BMt), because the MWUt also suffers from the Behrens-Fisher issue, while the BMt does not, and otherwise works exactly like the MWUt (think of it as a "corrected" MWUt). You may find this paper relevant. Since you will be making multiple comparisons, you should use a multiple comparison correction (MCC). With such a small number of comparisons, Bonferroni seems appropriate ($\alpha = .05/2 = .025$ for 2 comparisons, or $.05/3 \approx .0167$ for 3).
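As a sketch of that final recipe, the pairwise BMt with a Bonferroni-corrected $\alpha$ could look like the following (the three groups and their responses are hypothetical; `scipy.stats.brunnermunzel` requires scipy >= 1.2):

```python
# Sketch: pairwise Brunner-Munzel tests with a Bonferroni correction.
# Group names, sizes, and responses are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "students": rng.integers(3, 8, size=35),  # made-up 1..7 responses
    "faculty":  rng.integers(3, 8, size=30),
    "industry": rng.integers(1, 6, size=45),
}

pairs = [("students", "faculty"), ("students", "industry"), ("faculty", "industry")]
alpha = 0.05 / len(pairs)   # Bonferroni: .05 / 3 ~ .0167

for a, b in pairs:
    w, p = stats.brunnermunzel(groups[a], groups[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: W = {w:.2f}, p = {p:.4g} ({verdict})")
```

Each significant pair would then be reported in the stochastic-superiority language described above, rather than as a difference in means or medians.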