In the upper right corner of the file view, click the pencil (edit) icon to open the workflow editor. To the right of the editor, use the GitHub Marketplace sidebar to browse actions. Actions with the verification badge indicate that GitHub has verified the creator of the action as a partner organization. You can also see the latest workflow runs and caches of actions/checkout, a GitHub action for checking out a repository.

Tragically, most of the time in all of these jobs is spent just checking out the repository and submodules, but we only need the blobs for the current branch. I would also like to see tags fetched, because for most automated release tools we need the git tags to compute the next version. (This request had the enhancement label added on Dec 20, 2022; a checkout along these lines is sketched below.)

We have found the GitHub Actions built-in caching mechanism to be extremely limiting: slow, small, and buggy. A run on a docs-only change (so it should've had a maximally populated cache) took 20m, 7m of which was the build step, 2m of which was fetching the cache, and 1m of which was saving the cache. We have also seen runs where fetching the cache just times out entirely (with no alerting other than if you happen to look at the UI). Luckily, for these builds we're generally doing everything in one job and just want caching (which we only write on postsubmit anyway) and don't need artifact storage (which we'd need on self-hosted runners), except that we have to do GCS auth through service account keys, unfortunately, which means that access is restricted to …

Ran a remote-cache experiment on this PR (hacked the workflow a bit). With an empty cache the job took 28m total, 15.5m of which was in the build step; this includes writing the remote cache (minor overhead). With a now-populated cache it took 14m total, 6.5m of which was in the build step. 79% of compiler calls were cacheable, and of those, 99% … Overall this seems like a marked improvement. (This kind of remote-cache setup is sketched below.)

Re: #1186 discovered that the checkout action could stall for a …

* Improve checkout performance on Windows runners by upgrading a dependency

We have found an issue regarding GitHub checkout action v2, Git LFS, and GitHub Enterprise Server (on Azure). We have a very simple workflow which doesn't work:

```yaml
- uses: actions/checkout@v2
  with:
    lfs: 'true'
```

All git lfs requests will be rejected … (A possible workaround is sketched below.)

Findings:

Finding 1: Every single checkout on the large runners is at least twice as slow as on a regular runner, and all of the extra time goes before the actual checkout starts. The post-checkout cleanup is on average 15 times slower than on a regular runner, and all of that time likewise goes before any cleanup is started. Even a simple sleep task uses twice the time of the sleep interval. How is it even possible that a sleep for 15 seconds takes almost double the time? This was done with a simple run: `Start-Sleep -Seconds 15`. (A minimal reproduction is sketched at the end of this post.)

ℹ️ The last row is median and not average, so that any single slow run shouldn't skew the result.
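A minimal sketch of the checkout that the request above asks for, assuming a recent actions/checkout where the `filter` and `fetch-tags` inputs exist (they do in v4; the v2-era action discussed elsewhere in this post predates them):

```yaml
# Fetch commits/trees eagerly but blobs lazily, and still fetch tags so
# release tooling can compute the next version from git history.
- uses: actions/checkout@v4
  with:
    filter: blob:none      # blobless partial clone: only blobs that are
                           # actually read get downloaded
    fetch-depth: 0         # full history, needed for tag-based versioning
    fetch-tags: true       # tags are skipped by default
    submodules: recursive  # submodules were part of the checkout cost above
```

With `blob:none`, blobs outside the checked-out branch are only downloaded on demand, which matches the "we only need the blobs for the current branch" request.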
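The caching report above doesn't name its remote-cache tool, but "compiler calls were cacheable" suggests a compiler cache; this sketch assumes sccache with a GCS bucket, a service-account key stored as a repository secret, sccache already installed, and a Linux runner. `example-build-cache` and `GCS_SA_KEY` are placeholder names:

```yaml
- name: Point the compiler cache at GCS
  run: |
    # Service-account key auth; as noted above, this is what restricts
    # who can access the cache.
    echo '${{ secrets.GCS_SA_KEY }}' > "${RUNNER_TEMP}/gcs-key.json"
    {
      echo "SCCACHE_GCS_BUCKET=example-build-cache"
      echo "SCCACHE_GCS_KEY_PATH=${RUNNER_TEMP}/gcs-key.json"
      # Write the cache only on postsubmit (push) runs, read-only elsewhere.
      echo "SCCACHE_GCS_RW_MODE=${{ github.event_name == 'push' && 'READ_WRITE' || 'READ_ONLY' }}"
      # Route C/C++ compiles through the cache (CMake 3.17+ reads these).
      echo "CMAKE_C_COMPILER_LAUNCHER=sccache"
      echo "CMAKE_CXX_COMPILER_LAUNCHER=sccache"
    } >> "$GITHUB_ENV"
```

After the build step, `sccache --show-stats` reports cacheable-call and hit rates like the percentages quoted above.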
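The LFS report above is cut off before any resolution, so the following is only a commonly suggested workaround, not the issue's confirmed fix: disable the action's built-in LFS download and run `git lfs pull` yourself, which reuses the credentials the checkout step persists into the local git config:

```yaml
# Workaround sketch: fetch LFS objects in a separate step instead of
# via `lfs: 'true'`. Relies on persist-credentials (the default) so that
# git-lfs can reuse checkout's auth header.
- uses: actions/checkout@v2
- run: git lfs pull
```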
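Finally, a minimal reproduction sketch for the large-runner findings, assuming Windows runners on both sides; `example-large-runner` is a placeholder label. The number to compare is each step's wall-clock time in the Actions UI, since the reported overhead accrues before the step's command even starts:

```yaml
on: workflow_dispatch  # trigger manually, repeat runs, compare medians
jobs:
  sleep-regular:
    runs-on: windows-latest       # regular hosted runner
    steps:
      - name: Sleep 15s
        run: Start-Sleep -Seconds 15
  sleep-large:
    runs-on: example-large-runner # placeholder label for a large runner
    steps:
      - name: Sleep 15s
        run: Start-Sleep -Seconds 15
```

Using the median across repeated runs, as the note above does, keeps a single slow run from skewing the comparison.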