Adds a feature [like GitHub has](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) (step 7).
When you create a new PR from a forked repo, you can select the "Allow edits from maintainers" option (and change it later, but only if you are the PR creator/poster).
Then users with write access to the base branch get more permissions on this branch:
* use the update pull request button
* push directly from the command line (`git push`)
* edit/delete/upload files via web UI
* use related API endpoints
This does not let you merge PRs into the branch; that still requires "full" code write permissions.
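For context, a minimal sketch of the kind of permission check this implies is below; all type and function names are illustrative, not Gitea's actual API.

```go
package sketch

// Illustrative types only; the real Gitea models differ.
type User struct{ ID int64 }

type Repository struct{ ID int64 }

type PullRequest struct {
	AllowMaintainerEdit bool
	HeadRepoID          int64
	BaseRepoID          int64
	BaseRepo            *Repository
}

// hasWriteAccessToCode stands in for Gitea's real permission lookup.
func hasWriteAccessToCode(repo *Repository, doer *User) bool { return false }

// canMaintainerEdit shows, roughly, when a user with write access to the base
// repository may push to the head branch of a PR opened from a fork.
func canMaintainerEdit(pr *PullRequest, doer *User) bool {
	// Only applies to PRs from forks where the creator enabled the option.
	if !pr.AllowMaintainerEdit || pr.HeadRepoID == pr.BaseRepoID {
		return false
	}
	return hasWriteAccessToCode(pr.BaseRepo, doer)
}
```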
This feature has a pretty big impact on the permission system. I might have forgotten to change some things or missed security vulnerabilities. In that case, please leave a review or comment on this PR.
Closes #17728
Co-authored-by: 6543 <6543@obermui.de>
Storing the foreign identifier of an imported issue in the database is a prerequisite for implementing idempotent migrations or mirroring for issues. It is a baby step towards mirroring that introduces a new table.
At the moment, when an issue is created by the Gitea uploader, it fails if the issue already exists. The Gitea uploader could be modified so that, instead of failing, it looks up the database to find an existing issue, and if it finds one it updates that issue instead of creating a new one. However, this is not currently possible because a piece of information is missing from the database: the foreign identifier that uniquely represents the issue being migrated is not persisted. With this change, the foreign identifier is stored in the database and the Gitea uploader will then be able to run a query to figure out whether a given issue being imported already exists.
The implementation of mirroring for issues, pull requests, releases, etc. can be done in three steps:
1. Store an identifier for the element being mirrored (issue, pull request...) in the database (this is the purpose of these changes)
2. Modify the Gitea uploader to be able to update an existing repository with all it contains (issues, pull requests...) instead of failing if it exists
3. Optimize the Gitea uploader to speed up the updates, when possible.
The second step creates code that does not yet exist to enable idempotent migrations with the Gitea uploader. When a migration is done for the first time, the behavior is not changed. But when a migration is done for a repository that already exists, this new code is used to update it.
The third step can use the code created in the second step to optimize and speed up migrations. For instance, when a migration is resumed, an issue that has an update time that is not more recent can be skipped and only newly created issues or updated ones will be updated. Another example of optimization could be that a webhook notifies Gitea when an issue is updated. The code triggered by the webhook would download only this issue and call the code created in the second step to update the issue, as if it was in the process of an idempotent migration.
The ForeignReferences table is added to contain local and foreign ID pairs relative to a given repository. It can later be used for pull requests and other artifacts that can be mirrored. Although the foreign id could be added as a single field in issues or pull requests, it would need to be added to all tables that represent something that can be mirrored. Creating a new table makes for a simpler and more generic design. The drawback is that it requires an extra lookup to obtain the information. However, this extra information is only required during migration or mirroring and does not impact the way Gitea currently works.
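A minimal sketch of what the new table could look like as an xorm model is below; the actual field names, types and indexes in Gitea's ForeignReferences table may differ.

```go
package sketch

// ForeignReference maps a local element (issue, pull request, ...) of a
// repository to its identifier on the remote forge it was migrated from.
type ForeignReference struct {
	// RepoID is the repository the reference belongs to.
	RepoID int64 `xorm:"INDEX NOT NULL"`
	// LocalIndex is the identifier of the element inside Gitea.
	LocalIndex int64 `xorm:"INDEX NOT NULL"`
	// ForeignIndex is the identifier of the element on the remote forge.
	ForeignIndex string `xorm:"INDEX NOT NULL"`
	// Type distinguishes the kind of element (issue, pull_request, ...).
	Type string `xorm:"VARCHAR(16) INDEX NOT NULL"`
}
```

The extra lookup mentioned above then reduces to a single query keyed on (RepoID, Type, ForeignIndex).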
The foreign identifier of an issue or pull request is similar to the identifier of an external user, which is stored in reactions, issues, etc. as OriginalPosterID and so on. The representation of a user is however different and the ability of users to link their account to an external user at a later time is also a logic that is different from what is involved in mirroring or migrations. For these reasons, despite some commonalities, it is unclear at this time how the two tables (foreign reference and external user) could be merged together.
The ForeignID field is extracted from the issue migration context so that it can be dumped in files with dump-repo and later restored via restore-repo.
The GetAllComments downloader method is introduced to simplify the implementation and not overload the Context for the purpose of pagination. It also clarifies in which context the comments are paginated and in which context they are not.
The Context interface is no longer useful for the purpose of retrieving the LocalID and ForeignID since they are now both available from the PullRequest and Issue structs. The Reviewable and Commentable interfaces replace it and serve the same purpose.
The Context data member of PullRequest and Issue becomes a DownloaderContext to clarify that its only purpose is to support in-memory operations while the current downloader is running; it is not otherwise persisted. It is, for instance, used by the GitLab downloader to store the IsMergeRequest boolean and sort out issues.
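Roughly, the interfaces and the context type could have the following shape; the method names here are illustrative rather than the exact ones used in the migration module.

```go
package sketch

// DownloaderContext carries downloader-private, in-memory-only data;
// it is never persisted.
type DownloaderContext interface{}

// Reviewable is anything that exposes its local and foreign identifiers.
type Reviewable interface {
	GetLocalIndex() int64
	GetForeignIndex() int64
}

// Commentable additionally exposes the downloader context, e.g. so the
// GitLab downloader can tell merge requests and issues apart.
type Commentable interface {
	Reviewable
	GetContext() DownloaderContext
}
```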
---
[source](https://lab.forgefriends.org/forgefriends/forgefriends/-/merge_requests/36)
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Loïc Dachary <loic@dachary.org>
* Fix page and missing return on unadopted repos API
The page must default to 1 if it is not specified, and the handler should return after sending an internal server error.
* Allow ignore pages
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
v208.go is seriously broken as it is missing an ID() check. We need to make it a no-op and re-migrate all of the U2F keys.
See #18756
Signed-off-by: Andrew Thornton <art27@cantab.net>
Unfortunately, credentialIDs in U2F are 255 bytes long, which becomes 408 bytes after base32 encoding.
The default size of an xorm string field is only VARCHAR(255).
This problem is not apparent on SQLite because strings are mapped to TEXT there.
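A hedged sketch of the fix in terms of the model definition is below; the column size 410 and the struct shape are illustrative, and existing installs still need a migration to widen the column.

```go
package sketch

type WebAuthnCredential struct {
	ID int64 `xorm:"pk autoincr"`
	// 255 raw bytes become roughly 408 characters after base32 encoding,
	// so the default VARCHAR(255) is too small.
	CredentialID string `xorm:"INDEX VARCHAR(410)"`
}
```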
Fix #18727
Signed-off-by: Andrew Thornton <art27@cantab.net>
This contains some additional fixes and small nits related to #17957
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Migrate from U2F to Webauthn
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* Team permissions: allow different units to have different permissions
* Finish the interface and the logic
* Fix lint
* Fix translation
* align center for table cell content
* Fix fixture
* merge
* Fix test
* Add deprecated
* Improve code
* Add tooltip
* Fix swagger
* Fix newline
* Fix tests
* Fix tests
* Fix test
* Fix test
* Max permission of external wiki and issues should be read
* Move team units with limited max level below units table
* Update label and column names
* Some improvements
* Fix lint
* Some improvements
* Fix template variables
* Add permission docs
* improve doc
* Fix fixture
* Fix bug
* Fix some bug
* fix
* gofumpt
* Integration test for migration (#18124)
integrations: basic test for Gitea {dump,restore}-repo
This is a first step for integration testing of DumpRepository and
RestoreRepository. It:
* runs a Gitea server,
* dumps a repo via DumpRepository to the filesystem,
* restores the repo via RestoreRepository from the filesystem,
* dumps the restored repository to the filesystem,
* compares the first and second dump and expects them to be identical
The verification is trivial and the goal is to add more tests for each
topic of the dump.
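A condensed sketch of the shape of such a round-trip test, assuming hypothetical dumpRepo/restoreRepo helpers that drive DumpRepository and RestoreRepository against the running test server, a fixture repo name, and a `diff` binary on the test machine:

```go
package integrations

import (
	"os/exec"
	"path/filepath"
	"testing"
)

// dumpRepo and restoreRepo stand in for calls to DumpRepository and
// RestoreRepository against the running Gitea test server.
func dumpRepo(t *testing.T, repo, dir string)    { t.Helper() /* ... */ }
func restoreRepo(t *testing.T, dir, repo string) { t.Helper() /* ... */ }

func TestDumpRestoreRoundTrip(t *testing.T) {
	tmp := t.TempDir()
	first := filepath.Join(tmp, "dump1")
	second := filepath.Join(tmp, "dump2")

	dumpRepo(t, "user2/repo1", first)         // dump an existing fixture repo
	restoreRepo(t, first, "user2/repo1-copy") // restore it under a new name
	dumpRepo(t, "user2/repo1-copy", second)   // dump the restored repo

	// The two dumps are expected to be identical.
	if out, err := exec.Command("diff", "-r", first, second).CombinedOutput(); err != nil {
		t.Fatalf("dumps differ: %s", out)
	}
}
```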
Signed-off-by: Loïc Dachary <loic@dachary.org>
* Team permissions: allow different units to have different permissions
* Finish the interface and the logic
* Fix lint
* Fix translation
* align center for table cell content
* Fix fixture
* merge
* Fix test
* Add deprecated
* Improve code
* Add tooltip
* Fix swagger
* Fix newline
* Fix tests
* Fix tests
* Fix test
* Fix test
* Max permission of external wiki and issues should be read
* Move team units with limited max level below units table
* Update label and column names
* Some improvements
* Fix lint
* Some improvements
* Fix template variables
* Add permission docs
* improve doc
* Fix fixture
* Fix bug
* Fix some bug
* Fix bug
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Aravinth Manivannan <realaravinth@batsense.net>
- The current implementation of `RandomString` doesn't give you the maximum possible randomness: it provides only 6*`length` bits instead of the possible 8*`length` bits (i.e. `length` bytes) of randomness. This is because `RandomString` is limited to 63 possible values per character, in order to represent each random byte as a letter/digit.
- The recommendation for pbkdf2 is to use a 64+ bit salt, which `RandomString` doesn't provide with a length of 10. Instead of increasing 10 to a higher number, this patch adds a new function called `RandomBytes`, which guarantees the full 8*`length` bits (i.e. `length` bytes) of randomness.
- Use hexadecimal encoding to store the byte value in the database since, as mentioned, raw bytes don't convert nicely to a string. The stored value will always have a length of 32 (with `length` being 16).
- When we detect on `Authenticate` (source: db) that a user has the old format of salt, re-hash the password so that the user's password is hashed with the increased salt.
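A minimal sketch of what such a helper could look like, using crypto/rand; the actual function in Gitea may live elsewhere and have a slightly different signature.

```go
package sketch

import (
	"crypto/rand"
	"encoding/hex"
)

// RandomBytes returns `length` cryptographically secure random bytes,
// i.e. the full 8*length bits of randomness.
func RandomBytes(length int) ([]byte, error) {
	b := make([]byte, length)
	if _, err := rand.Read(b); err != nil {
		return nil, err
	}
	return b, nil
}

// NewSaltHex shows the storage format: a 16-byte salt hex-encodes to a
// 32-character string suitable for the existing string column.
func NewSaltHex() (string, error) {
	b, err := RandomBytes(16)
	if err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}
```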
Thanks to @zeripath for working out the rough edges from my first commit 😄.
Co-authored-by: lafriks <lauris@nix.lv>
Co-authored-by: zeripath <art27@cantab.net>
* Add support for ssh commit signing
* Split out ssh verification to separate file
* Show ssh key fingerprint on commit page
* Update sshsig lib
* Make sure we verify against correct namespace
* Add ssh public key verification via ssh signatures
When adding a public SSH key, also validate that the user actually
owns the key by having them sign a token with the private key.
* Remove some gpg references and make verify key optional
* Fix spaces indentation
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update templates/user/settings/keys_ssh.tmpl
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update models/ssh_key_commit_verification.go
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Reword ssh/gpg_key_success message
* Change Badsignature to NoKeyFound
* Add sign/verify tests
* Fix upstream api changes to user_model User
* Match exact on SSH signature
* Fix code review remarks
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
We have the `AppState` module now; it can store app-related data easily. We do not need to create separate tables for each feature.
So the update checker can use `AppState` instead of a dedicated one-row table.
And the code of the update checker is moved from `models` to `modules`.
Gitea writes its own AppPath into git hook scripts. If Gitea's AppPath changes, then the git push will fail.
This PR:
* Introduce an AppState module that can persist app state into the database
* During GlobalInit, Gitea will check if the current AppPath is the same as the last one. If they don't match, Gitea will sync the git hooks.
* Refactor some code to make it clearer.
* Also, "Detect if gitea binary's name changed" (#11341) is related: we call models.RewriteAllPublicKeys to update the ssh authorized_keys file
* issue content history
* Use timeutil.TimeStampNow() for content history time instead of issue/comment.UpdatedUnix (which are not updated in time)
* i18n for frontend
* refactor
* clean up
* fix refactor
* re-format
* temp refactor
* follow db refactor
* rename IssueContentHistory to ContentHistory, remove empty model tags
* fix html
* use avatar refactor to generate avatar url
* add unit test, keep at most 20 history revisions.
* re-format
* syntax nit
* Add issue content history table
* Update models/migrations/v197.go
Co-authored-by: 6543 <6543@obermui.de>
* fix merge
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
- Update default branch if needed
- Update protected branch if needed
- Update the base branch name of all unmerged pull requests
- Rename git branch
- Record this rename and automatically redirect the old branch name in the UI
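For illustration, a hedged sketch of how the rename could be recorded for later redirects; the model and function names are hypothetical, not Gitea's actual ones.

```go
package sketch

// RenamedBranch records that `From` was renamed to `To` in a repository, so
// the UI can redirect requests for the old branch name.
type RenamedBranch struct {
	ID     int64  `xorm:"pk autoincr"`
	RepoID int64  `xorm:"INDEX NOT NULL"`
	From   string `xorm:"NOT NULL"`
	To     string `xorm:"NOT NULL"`
}

// findRenamedBranch resolves an old branch name to its new one, if any.
func findRenamedBranch(records []RenamedBranch, repoID int64, from string) (string, bool) {
	for _, r := range records {
		if r.RepoID == repoID && r.From == from {
			return r.To, true
		}
	}
	return "", false
}
```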
Signed-off-by: a1012112796 <1012112796@qq.com>
Co-authored-by: delvh <dev.lh@web.de>
* Fix commit status index problem
* remove unused functions
* Add fixture and test for migration
* Fix lint
* Fix fixture
* Fix lint
* Fix test
* Fix bug
* Fix bug
Fixes #16381
Note that changes to unprotected files via the web editor still cannot be pushed directly to the protected branch. I could easily add such support for edits and deletes if needed. But for adding, uploading or renaming unprotected files, it is not trivial.
* Extract & Move GetAffectedFiles to modules/git
When creating a new issue or comment and pasting/uploading an attachment/image, no issue ID is assigned before submitting. So if the user abandons the creation, the attachments lose their key reference and become orphaned content, and we don't know whether to delete them even when the repository is deleted.
This PR adds a repo_id column to the attachment table, so that even a newly uploaded attachment with no issue_id or release_id still has a repo_id. When the repository is deleted, these attachments can then also be deleted.
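A hedged sketch of the extended attachment model and why the new column helps; field names are close to, but not necessarily identical to, Gitea's.

```go
package sketch

type Attachment struct {
	ID        int64 `xorm:"pk autoincr"`
	RepoID    int64 `xorm:"INDEX"` // new: known at upload time, even before submit
	IssueID   int64 `xorm:"INDEX"` // may still be 0 if the issue was never submitted
	ReleaseID int64 `xorm:"INDEX"`
}

// attachmentsOfRepo illustrates the clean-up enabled by repo_id: when a
// repository is deleted, its orphaned attachments can be found and removed
// even though they have no issue_id or release_id.
func attachmentsOfRepo(all []Attachment, repoID int64) []Attachment {
	var found []Attachment
	for _, a := range all {
		if a.RepoID == repoID {
			found = append(found, a)
		}
	}
	return found
}
```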
Co-authored-by: 6543 <6543@obermui.de>
Make the group_id a primary key in issue_index. This column already has a unique index
and therefore is a good candidate for becoming a primary key.
This PR also changes all other uses of this table to add the group_id as the
primary key.
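As a sketch, with group_id as the primary key the table definition reduces to something like the following (xorm model form, names illustrative):

```go
package sketch

// ResourceIndex holds the highest index handed out for a group, where the
// group is typically a repository ID.
type ResourceIndex struct {
	GroupID  int64 `xorm:"pk"`    // unique per group, now also the primary key
	MaxIndex int64 `xorm:"index"` // highest issue/PR index used so far
}
```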
Fix #16802
Signed-off-by: Andrew Thornton <art27@cantab.net>
The MySQL indexes are not being renamed at the same time as RENAME table despite the
CASCADE. Therefore it is probably better to just recreate the indexes instead.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
* Restore compatibility with SQLServer 2008 R2 in migrations
`ALTER TABLE ... DROP ... IF EXISTS ...` is only supported in SQL Server 2016 and later.
The `IF EXISTS` here is belt-and-braces and does not need to be present, therefore it
can be dropped.
We need to figure out some way of restricting our SQL syntax against the minimum
version of SQL Server we will support.
My suspicion is that `ALTER DATABASE database_name SET COMPATIBILITY_LEVEL = 100` may
do that, but there may be other side effects, so I am not sure whether to do that.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* try just dropping the index only
Signed-off-by: Andrew Thornton <art27@cantab.net>
* use lowercase for system tables
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
`models` does far too much. In particular it handles all `UserSignin`.
It shouldn't be responsible for calling LDAP, SMTP or PAM for signing in.
Therefore we should move this code out of `models`.
This code has to depend on `models` - therefore it belongs in `services`.
There is a package in `services` called `auth` and clearly this functionality belongs in there.
Plan:
- [x] Change `auth.Auth` to `auth.Method` - as they represent methods of authentication.
- [x] Move `models.UserSignIn` into `auth`
- [x] Move `models.ExternalUserLogin`
- [x] Move most of the `LoginVia*` methods to `auth` or subpackages
- [x] Move Resynchronize functionality to `auth`
- Involved some restructuring of `models/ssh_key.go` to reduce the size of this massive file and simplify it.
- [x] Move the rest of the LDAP functionality in to the ldap subpackage
- [x] Re-factor the login sources to express an `auth.Source` interface?
- I've done this through some smaller interfaces Authenticator and Synchronizable - which would allow us to extend things in future
- [x] Now LDAP is out of models - need to think about modules/auth/ldap and I think all of that functionality might just be moveable
- [x] Similarly a lot of OAuth2 functionality need not be in models and should be moved to services/auth/source/oauth2
- [x] modules/auth/oauth2/oauth2.go uses xorm... This is naughty - probably need to move this into models.
- [x] models/oauth2.go - mostly should be in modules/auth/oauth2 or services/auth/source/oauth2
- [x] More simplifications of login_source.go may need to be done
- Allow wiring in of notify registration - *this can now easily be done - but I think we should do it in another PR* - see #16178
- More refactors...?
- OpenID should probably become an auth Method but I think that can be left for another PR
- Methods should also probably be cleaned up - again another PR I think.
- SSPI still needs more refactors.
* Rename auth.Auth auth.Method
* Restructure ssh_key.go
- move functions from models/user.go that relate to ssh_key to ssh_key
- split ssh_key.go to try to create clearer function domains, to allow for
future refactors here.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add option to provide signed token to verify key ownership
Currently we only allow a key to be matched to a user if it matches
an activated email address. This PR provides a different mechanism: the
user provides a signature for an automatically generated token (based
on the timestamp, user creation time, user ID, username and primary
email). A sketch of the token idea follows this list of changes.
* Ensure verified keys can act for all active emails for the user
* Add code to mark keys as verified
* Slight UI adjustments
* Slight UI adjustments 2
* Simplify signature verification slightly
* fix postgres test
* add api routes
* handle swapped primary-keys
* Verify the no-reply address for verified keys
* Only add email addresses that are activated to keys
* Fix committer shortcut properly
* Restructure gpg_keys.go
* Use common Verification Token code
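As referenced above, a rough sketch of the kind of server-generated token being signed, assuming a hypothetical buildVerificationToken helper; the exact format and fields Gitea concatenates differ from this illustration.

```go
package sketch

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"time"
)

// buildVerificationToken derives a challenge from server-side data about the
// user. The user signs this token with their private key, and the server
// verifies the signature against the public key being added.
func buildVerificationToken(secret string, userID int64, username, email string, created, now time.Time) string {
	// Truncating "now" gives the token a limited validity window.
	msg := fmt.Sprintf("%d:%s:%s:%d:%d", userID, username, email, created.Unix(), now.Truncate(time.Hour).Unix())
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(msg))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}
```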
Signed-off-by: Andrew Thornton <art27@cantab.net>
This PR removes multiple unneeded fields from the `HookTask` struct and adds the two headers `X-Hub-Signature` and `X-Hub-Signature-256`.
## ⚠️ BREAKING ⚠️
* The `Secret` field is no longer passed as part of the payload.
* "Breaking" change (or fix?): The webhook history shows the real called url and not the url registered in the webhook (`deliver.go`@129).
Close #16115, Fixes #7788, Fixes #11755
Co-authored-by: zeripath <art27@cantab.net>
* Add migrating message
Signed-off-by: Andrew Thornton <art27@cantab.net>
* simplify messenger
Signed-off-by: Andrew Thornton <art27@cantab.net>
* make messenger an interface
Signed-off-by: Andrew Thornton <art27@cantab.net>
* rename
Signed-off-by: Andrew Thornton <art27@cantab.net>
* prepare for merge
Signed-off-by: Andrew Thornton <art27@cantab.net>
* as per tech
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
* Add a new table issue_index to store the max issue index, so that issues can be deleted without a duplicated index being produced (see the sketch after this list)
* Fix pull index
* Add tests for concurrent creating issues
* Fix lint
* Fix tests
* Fix postgres test
* Add test for migration v180
* Rename wrong test file name
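As referenced above, a sketch of the core idea: atomically bump and read the per-repository max index inside a transaction, so a deleted issue's index is never reused. This uses a SQLite/PostgreSQL-style upsert and `?` placeholders; Gitea's real implementation covers more database dialects and handles retries.

```go
package sketch

import (
	"context"
	"database/sql"
)

// nextIssueIndex returns the next unused issue index for a repository.
func nextIssueIndex(ctx context.Context, db *sql.DB, repoID int64) (int64, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return 0, err
	}
	defer func() { _ = tx.Rollback() }()

	// Insert the first index or bump the existing one.
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO issue_index (group_id, max_index) VALUES (?, 1)
		 ON CONFLICT(group_id) DO UPDATE SET max_index = max_index + 1`, repoID); err != nil {
		return 0, err
	}
	var idx int64
	if err := tx.QueryRowContext(ctx,
		`SELECT max_index FROM issue_index WHERE group_id = ?`, repoID).Scan(&idx); err != nil {
		return 0, err
	}
	return idx, tx.Commit()
}
```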
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Always store primary email address into email_address table and also the state
* Add a lower_email column so that the email itself is not converted to lowercase when added
* Fix fixture
* Fix tests
* Use BeforeInsert to save lower email
* Fix v180 migration
* fix tests
* Fix test
* Remove wrongly submitted code
* Fix test
* Fix test
* Fix test
* Add test for v181 migration
* Remove the change that lowercased the user's email
* Revert change on user's email column
* Fix lower email
* Fix test
* Fix test
* encrypt migration credentials in task persistence
Not sure this is the best approach, we could encrypt the entire
`PayloadContent` instead. Also instead of clearing individual fields in
payload content, we could just delete the task once it has
(successfully) finished..?
* remove credentials of past migrations
* only run DB migration for completed tasks
* fix binding
* add omitempty
* never serialize unencrypted credentials
* fix import order
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Refactored handleOAuth2SignIn in routers/user/auth.go
The function handleOAuth2SignIn was called twice, but some code paths could only
be reached by one of the invocations. Moved the unnecessary code paths out of
handleOAuth2SignIn.
* Refactored user creation
There was common code to create a user and display the correct error message.
After the creation, the first and only user should be made an admin and, if enabled, a
confirmation email should be sent. This common code is now abstracted into
two functions and a helper function to call both.
* Added auto-register for OAuth2 users
If enabled, new OAuth2 users will be registered with their OAuth2 details.
The UserID, Name and Email fields from the gothUser are used.
Therefore the OpenID Connect provider needs additional scopes to return
the corresponding claims.
* Added error for missing fields in OAuth2 response
* Linking and auto linking on oauth2 registration
* Set default username source to nickname
* Add automatic oauth2 scopes for github and google
* Add hint to change the openid connect scopes if fields are missing
* Extend info about auto linking security risk
Co-authored-by: Viktor Kuzmin <kvaster@gmail.com>
Signed-off-by: Martin Michaelis <code@mgjm.de>
* Implemented LFS client.
* Implemented scanning for pointer files.
* Implemented downloading of lfs files.
* Moved model-dependent code into services.
* Removed models dependency. Added TryReadPointerFromBuffer.
* Migrated code from service to module.
* Centralised storage creation.
* Removed dependency from models.
* Moved ContentStore into modules.
* Share structs between server and client.
* Moved method to services.
* Implemented lfs download on clone.
* Implemented LFS sync on clone and mirror update.
* Added form fields.
* Updated templates.
* Fixed condition.
* Use alternate endpoint.
* Added missing methods.
* Fixed typo and make linter happy.
* Detached pointer parser from gogit dependency.
* Fixed TestGetLFSRange test.
* Added context to support cancellation.
* Use ReadFull to probably read more data.
* Removed duplicated code from models.
* Moved scan implementation into pointer_scanner_nogogit.
* Changed method name.
* Added comments.
* Added more/specific log/error messages.
* Embedded lfs.Pointer into models.LFSMetaObject.
* Moved code from models to module.
* Moved code from models to module.
* Moved code from models to module.
* Reduced pointer usage.
* Embedded type.
* Use promoted fields.
* Fixed unexpected eof.
* Added unit tests.
* Implemented migration of local file paths.
* Show an error on invalid LFS endpoints.
* Hide settings if not used.
* Added LFS info to mirror struct.
* Fixed comment.
* Check LFS endpoint.
* Manage LFS settings from mirror page.
* Fixed selector.
* Adjusted selector.
* Added more tests.
* Added local filesystem migration test.
* Fixed typo.
* Reset settings.
* Added special windows path handling.
* Added unit test for HTTPClient.
* Added unit test for BasicTransferAdapter.
* Moved into util package.
* Test if LFS endpoint is allowed.
* Added support for git://
* Just use a static placeholder as the displayed url may be invalid.
* Reverted to original code.
* Added "Advanced Settings".
* Updated wording.
* Added discovery info link.
* Implemented suggestion.
* Fixed missing format parameter.
* Added Pointer.IsValid().
* Always remove model on error.
* Added suggestions.
* Use channel instead of array.
* Update routers/repo/migrate.go
* fmt
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
* Create Proper Migration tests
Unfortunately our testing regime has so far meant that migrations do not
get proper testing.
This PR begins the process of creating migration tests for this.
* Add test for v176
* fix mssql drop db
Signed-off-by: Andrew Thornton <art27@cantab.net>