Conversation

@boomanaiden154
Contributor

This is intended to catch failures like https://lab.llvm.org/staging/#/builders/192/builds/1671 (LLDB formatter-related) before they land in tree.

@boomanaiden154 boomanaiden154 marked this pull request as ready for review August 25, 2025 17:46
@boomanaiden154
Contributor Author

@ldionne @philnik777 @EricWF Any thoughts on this?

@philnik777
Contributor

I thought the LLDB tests were already run in the bootstrapping build?

@boomanaiden154
Contributor Author

It doesn't look like it. #94379 landed with all CI passing despite there being an LLDB failure that the premerge bots then caught. https://lab.llvm.org/staging/#/builders/192/builds/4005

We can do it either way, but I think having this coverage is important, ideally so that patches can be coordinated between LLDB and libc++.

@Michael137 left a comment
Member

Hmmm, the libc++ folks had an issue with this back when we discussed running the LLDB tests in libc++ CI, mainly because of how long they might take to run. That's why we only run the data-formatter tests in the libc++ CI (#88312). But maybe that has changed (or maybe I'm misunderstanding the current PR).

@Michael137
Member

Oh I see @philnik777 already chimed in.

I think I see what happened.

The test that failed was skipped on Linux for the longest time. I re-enabled it very recently specifically to catch libc++<->LLDB breakages: #157649

But the pre-merge CI on the libc++ PR probably ran without being rebased on that change. So the test was skipped.

I think at this point all the libc++-specific tests in LLDB are run as part of the bootstrapping build.

@boomanaiden154
Contributor Author

But the pre-merge CI on the libc++ PR probably ran without being rebased on that change. So the test was skipped.

GitHub Actions runs the changes as if they were merged into main, so the branch doesn't need to be rebased. It looks like the CI run predated the landing of that PR, though (the CI run was five days old and the PR landed four days ago), so it never picked the change up.
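For illustration, here is a minimal sketch of the kind of `pull_request`-triggered job involved (the workflow name, path filter, and job name are hypothetical, not the actual premerge configuration):

```yaml
# Hypothetical workflow fragment illustrating the merge-ref behavior described above.
name: libcxx-premerge-example        # hypothetical name
on:
  pull_request:
    paths:
      - 'libcxx/**'                  # hypothetical path filter
jobs:
  lldb-data-formatters:
    runs-on: ubuntu-latest
    steps:
      # For pull_request events, actions/checkout defaults to refs/pull/<N>/merge,
      # i.e. the PR head merged into main as of the time the run was triggered. If
      # main moves afterwards (e.g. #157649 landing later), an older run won't see it.
      - uses: actions/checkout@v4
      # ... build and run the LLDB libc++ data-formatter tests here ...
```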

If this is running as part of the bootstrapping build, that seems reasonable enough to me. I just want to make sure we aren't missing test coverage for these sorts of changes. This is the second or third time the LLDB data formatters for libc++ have broken all of premerge for an extended period.

@Michael137
Member

It looks like the CI run predated the landing of that PR, though (the CI run was five days old and the PR landed four days ago), so it never picked the change up.

Agreed

If this is running as part of the bootstrapping build, that seems reasonable enough to me. I just want to make sure we aren't missing test coverage for these sorts of changes. This is the second or third time the LLDB data formatters for libc++ have broken all of premerge for an extended period.

I'm pretty sure that if we tried relanding #94379, the pre-merge CI would fail on it (would be good to confirm :))

@ldionne
Member

ldionne commented Dec 1, 2025

I don't have latency or capacity concerns with this patch since the LLVM-wide checks use a different set of runners, and these checks run quite quickly.

My main concern is that running both the CI jobs defined by libc++ and the monorepo-wide CI jobs provides little value (I guess that is what I am claiming) while being confusing for people, since it's unclear which CI checks they should care about.

If the only goal of this PR was to ensure the LLDB data formatters are being run on libc++ changes, they are, so I think we can close this. If there's more to it, I'd also be fine with landing the patch and running the LLVM-wide CI jobs in addition to libc++'s own CI jobs.

@boomanaiden154
Contributor Author

If the only goal of this PR was to ensure the LLDB data formatters are being run on libc++ changes, they are, so I think we can close this.

That was the goal, and I think #157649 will enable catching the others. I haven't seen this failure mode in a while, so we can close this for now and figure out what the fidelity issue actually is if it comes up again.
