Alternate between generating prompts from GitHub issues and parsing LLM
responses until the user exits.
Also adjust LLM response parsing to handle extraneous text that gets
copied along with code blocks from ChatGPT's web UI.
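A parsing tweak along these lines might look like the sketch below. The exact artifact text (a standalone "Copy code" line) is an assumption about what the web UI produces when a code block is copied, not something confirmed from the source.

```go
package main

import (
	"fmt"
	"strings"
)

// stripChatGPTArtifacts drops lines that ChatGPT's web UI is assumed to
// include when a code block is copied, such as a bare "Copy code" line.
func stripChatGPTArtifacts(response string) string {
	var kept []string
	for _, line := range strings.Split(response, "\n") {
		if strings.TrimSpace(line) == "Copy code" {
			continue // UI button text, not part of the model's answer
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}

func main() {
	raw := "go\nCopy code\nfmt.Println(\"hi\")"
	fmt.Println(stripChatGPTArtifacts(raw))
}
```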
Implement a CLI with cobra.
Right now there is just one root command.
You can set options in a config file or pass flags via the CLI;
environment variable support isn't wired up yet.
Running the root command checks the repo for issues and generates an
LLM prompt. Copy-paste the prompt into the LLM chat, then copy-paste
the response into a separate file. Press enter in the CLI tool, and it
will parse the response and open a PR with the change.
Integrate the llm and versioncontrol packages into the pullpal package:
* Functionality to select a GitHub issue and generate an LLM prompt
from it (PickIssue)
* Functionality to parse LLM response (from string or file), update
local git repository, and create pull request (ProcessResponse and
ProcessResponseFromFile)
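The issue-to-prompt half of that flow could be sketched as below. The Issue type and the PickIssue signature are stand-ins guessed from the description; the real pullpal API may differ.

```go
package main

import "fmt"

// Issue is an illustrative stand-in for whatever issue type pullpal uses.
type Issue struct {
	Number int
	Title  string
	Body   string
}

// PickIssue mimics the described behavior: choose an issue and render an
// LLM prompt from it. This sketch naively takes the first issue and
// assumes the slice is non-empty.
func PickIssue(issues []Issue) (Issue, string) {
	issue := issues[0]
	prompt := fmt.Sprintf(
		"Issue #%d: %s\n\n%s\n\nReply with the full contents of each file you change.",
		issue.Number, issue.Title, issue.Body)
	return issue, prompt
}

func main() {
	issue, prompt := PickIssue([]Issue{
		{Number: 7, Title: "Fix typo in README", Body: "The word 'teh' appears twice."},
	})
	fmt.Println("picked issue", issue.Number)
	fmt.Println(prompt)
}
```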
Added a "version control client" interface and created a GitHub
implementation of it. Right now it only creates pull requests and lists
issues.
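As a sketch, the interface might look something like this; the method names, types, and mock implementation are assumptions based only on the two capabilities mentioned (creating pull requests and listing issues).

```go
package main

import "fmt"

// Issue is a minimal stand-in for an issue returned by the client.
type Issue struct {
	Number int
	Title  string
}

// VersionControlClient captures the two capabilities described above.
// A GitHub implementation would back these methods with GitHub API calls.
type VersionControlClient interface {
	ListOpenIssues() ([]Issue, error)
	CreatePullRequest(title, body, branch string) (url string, err error)
}

// mockClient is a trivial in-memory implementation for illustration.
type mockClient struct{}

func (mockClient) ListOpenIssues() ([]Issue, error) {
	return []Issue{{Number: 1, Title: "example issue"}}, nil
}

func (mockClient) CreatePullRequest(title, body, branch string) (string, error) {
	return "https://example.com/pulls/1", nil
}

func main() {
	var c VersionControlClient = mockClient{}
	issues, _ := c.ListOpenIssues()
	fmt.Println(len(issues), "open issue(s)")
}
```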
Restructured some code; not sure yet whether it will be permanent.
Next I plan to add behavior around replacing files in the local repo,
and creating commits.