This is very cool, and I could see it being really useful, especially for giant PRs. Instead of the slider, I'd prefer to just click the different heatmap colors, and I'd like them to indicate what exactly they're for (a label, not a threshold). I get the underlying premise, but at a glance it's more to process unless I were to end up using this constantly.
Currently, tooltips are shown when hovering over highlighted words. They still need to be made visible on mobile, though. Were you thinking of another way to show the labels besides hovering?
This is something I have found missing in my current workflow when reviewing PRs, particularly in the age of large AI-generated PRs.
I think most reviewers do this to some degree by looking at points of interest. It'd be cool if this could look at your prior reviews and try to learn your style.
Thank you. This is a pretty cool feature that is just scratching the surface of a deep need, so keep at it.
Another perspective where this exact feature would be useful is in security review.
For example - there are many static security analyzers that look for patterns, and they're useful when you break a clearly predefined rule that is well known.
However, there are situations that static tools miss, where a highlight tool like this could help bring a reviewer's eyes to a high-risk "area": e.g., scrutinize this code more because it deals with user input and there is a chance of SQL injection here.
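To make that concrete, here's a hypothetical sketch (all names made up) of the kind of code a pattern-based static analyzer can miss: the unsafe interpolation is hidden behind a helper, so no single line matches a "string-concatenated SQL" rule, but an LLM heatmap could still mark the whole area as high risk because user input flows into a query string.

```typescript
// Hypothetical helper: looks innocuous on its own, but it interpolates
// a value into a SQL fragment without escaping or parameterization.
function buildFilter(column: string, value: string): string {
  return `${column} = '${value}'`;
}

// User-controlled `username` flows through the helper into the query,
// so the final string is injectable (e.g. username = "x' OR '1'='1").
function findUserQuery(username: string): string {
  return `SELECT * FROM users WHERE ${buildFilter("name", username)}`;
}

console.log(findUserQuery("x' OR '1'='1"));
// The safe version would use a parameterized query instead,
// e.g. db.query("SELECT * FROM users WHERE name = ?", [username]).
```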
This is really useful. You might want to add a checkbox above a certain threshold, so that reviewers explicitly answer the LLM's concerns. You could also start collecting stats on how "easy to review" team members' PRs are; e.g., they'd probably get a better score if they already address the concerns in the comments.
File `apps/client/electron/main/proxy-routing.ts` line 63
Would adding a comment explaining why the downgrade is done have prevented the issue from being raised?
Also, two suggestions on the UI:
- anchors on lines
- anchors on files and ability to copy a filename easily
Is this the correct commit to look at? https://github.com/manaflow-ai/cmux/commit/661ea617d7b1fd392...
This file has most of the logic; the commit you linked to has a bunch of other experiments.
> look at your prior reviews and try to learn your style.
We're really interested in this direction too, maybe setting up a DSPy system to automatically fit reviews to your preferences.
> a highlight tool like this could help bring a reviewer's eyes to a high risk "area"
I think that would be very useful as well.