In comments, @Tim writes:
Certainly it is possible that loss decreases while accuracy stays the same (loss is defined in terms of predicted probabilities, while accuracy is discrete). E.g., the targets are [0, 1] and the predicted probabilities are [0.1, 0.1] in the first case and [0.45, 0.45] in the second. Say you use a > 0.5 decision rule for classification; then the classifications do not change (and so neither does accuracy), but the loss may well change (say, log loss or squared loss).
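For concreteness, here is a small sketch (mine, not from the original comment, using NumPy) that reproduces the numbers in Tim's example: the log loss drops between the two cases, but the thresholded accuracy is 0.5 in both.

import numpy as np

y_true = np.array([0, 1])

def log_loss(y, p):
    # Binary cross-entropy averaged over examples.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def accuracy(y, p, threshold=0.5):
    # Discrete accuracy under a "> threshold" decision rule.
    return np.mean((p > threshold).astype(int) == y)

for p in (np.array([0.1, 0.1]), np.array([0.45, 0.45])):
    print(f"p={p}  log loss={log_loss(y_true, p):.3f}  "
          f"accuracy={accuracy(y_true, p):.2f}")

Running this prints a log loss of about 1.20 for the first case and about 0.70 for the second, with accuracy 0.50 in both, illustrating how the two metrics can move independently.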
I've copied this comment as a community wiki answer because it is, more or less, an answer to the question. This site has far more questions than answers, and at least part of the problem is that some questions are answered only in comments: if comments that answer the question were posted as answers instead, we would have fewer unanswered questions.