Running gitea across multiple servers #2959

Closed
svarlamov opened this issue Nov 22, 2017 · 34 comments
Labels
type/proposal The new feature has not been accepted yet but needs to be discussed first.

Comments

@svarlamov
Contributor

I've been working with Gitea for a few days now, and I find the project quite intriguing! Now I'm wondering what it would take to run Gitea on a multi-host setup, likely behind a single nginx load balancer. I think it would be very useful to let people host across servers, first and foremost because of the storage limitations of a single machine, as well as for setups where we don't want to keep the files locally but instead on S3 or a similar service. In addition, with larger teams that are online 24/7, availability and CPU begin to become bottlenecks.

Overall, it looks like Gitea's architecture would not be too challenging to distribute, given that it already has a central DB and stateless HTTP APIs (please correct me here if I'm wrong). However, two key things seem to be holding it back right now:

  1. No distributed task/messaging/scheduling queue
  2. No way to store repos beyond the local FS

I am excited to hear the maintainers' thoughts on this, perhaps how it fits into the roadmap for Gitea, and what sort of work this would involve. 😄
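
As a rough illustration of the first blocker, a distributed task queue can be as small as a shared Redis list that any web node pushes into and any worker pops from. A minimal sketch (not Gitea code; the queue name and payload format are made up):

// Minimal sketch of a Redis-backed task queue shared by several nodes.
// Not Gitea code; queue name and payload shape are assumptions.
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/go-redis/redis/v8"
)

type Task struct {
	Kind    string `json:"kind"`    // e.g. "mail" or "hook"
	Payload string `json:"payload"`
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Producer side: any web node can enqueue work.
	job, _ := json.Marshal(Task{Kind: "mail", Payload: "issue #1 created"})
	if err := rdb.LPush(ctx, "gitea:tasks", job).Err(); err != nil {
		log.Fatal(err)
	}

	// Worker side: any node (or a dedicated worker) dequeues and processes.
	res, err := rdb.BRPop(ctx, 5*time.Second, "gitea:tasks").Result()
	if err != nil {
		log.Fatal(err)
	}
	var t Task
	_ = json.Unmarshal([]byte(res[1]), &t) // res[0] is the queue name
	log.Printf("processing %s task: %s", t.Kind, t.Payload)
}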

@lafriks lafriks added this to the 2.x.x milestone Nov 22, 2017
@lafriks lafriks added the type/proposal The new feature has not been accepted yet but needs to be discussed first. label Nov 22, 2017
@lunny
Member

lunny commented Nov 22, 2017

@svarlamov Your thoughts match our long-term objective. The SQLite version is only suitable for a single node. For the other databases, you can connect through a database proxy for a distributed deployment. Reverse-proxy load balancing and shared sessions (via Macaron's Redis session store) are both ready as well.

As you mentioned above, there are two enhancements to do, and I think the second is more difficult than the first. As far as I know there is no out-of-the-box Go git library which supports S3 or other cloud storage systems, so we have to implement it ourselves in https://github.com/go-gitea/git. I have sent a PR go-gitea/git#92 to begin the work.

In fact, we also need some work to split the SSH server and hooks from the database. I have added an issue (#1023) and several PRs for that, but they are not finished. After that, the SSH server could be separated from the HTTP server.

If somebody wants to provide a public Git service based on Gitea, all of these things may need to be considered.

@kolaente
Member

kolaente commented Nov 22, 2017

And what about just hosting a bunch of Docker containers with a shared file system across multiple hosts?

@lunny
Member

lunny commented Nov 23, 2017

@kolaente But then you have to mount the shared file system into every container.

@svarlamov
Contributor Author

@lunny Sounds interesting... So the SSH stuff basically needs to get 'sharded', for lack of a better word? And what about 'locking' repos so that people don't step on each other?

@svarlamov
Contributor Author

@kolaente That's definitely a possibility; however, I'm wondering if that would still have issues working out of the box, as you would still need a distributed queue and some sort of locking method, I believe... Also, from a scalability perspective, you will likely need to shard those volumes sooner or later anyway.

@svarlamov
Contributor Author

@lunny After doing some more research on this topic, the following items have come to my attention:

  1. Using a networked FS (i.e., through shared volumes on Docker or an NFS implementation) raises quite a few issues, most crucially latency... Given Git's object-store architecture, this is basically impossible.

  2. libgit2 (with its additional Go bindings) could technically be used to store objects in a DB (or elsewhere) by implementing its pluggable backend interface (https://www.perforce.com/blog/your-git-repository-database-pluggable-backends-libgit2), but this, again for latency reasons, is unlikely to be usable

  3. Gitea's use of the go-gitea/git wrapper makes adopting libgit2 problematic...

  4. Access over SSH requires that the files sit on a filesystem (or an equivalent, such as a FUSE implementation); otherwise the standard git tooling will not work, for obvious reasons

  5. Further to 1, 2, 3, and 4: a filesystem (or a setup equivalent as far as latency is concerned) is really the only way to serve Git. Methods such as NFS, S3, etc., can only be used for backups - not 'hot' storage

My research also highlighted the GitHub engineering team's blog, where they discuss similar issues and echo many of the same concerns.

Their conclusion can be most simply described as 'sharding', where the repositories ultimately sit on native filesystems (as Gitea currently stores them) and requests are then proxied to the right server by a (in their case dynamic) routing system that knows where each repo lives. In their design, each repo is actually stored across three replicas for high availability, which is pretty useful, especially if you end up running all of this on commodity hardware...

This sort of sharding logic seems to fit pretty naturally into Gitea and is probably the most feasible solution that could be implemented once Gitea's other services are ported to multi-host.
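
As a toy illustration of that routing idea: GitHub's real system keeps a dynamic routing table with replicas, but even a simple hash of "owner/repo" over a fixed set of storage hosts shows how a front-end proxy could decide where to send each request (host names below are made up):

// Toy repo-to-shard routing sketch; a real implementation would keep the
// mapping in the database so repos can be moved and replicated independently.
package main

import (
	"fmt"
	"hash/fnv"
)

var shards = []string{"git-storage-1:3000", "git-storage-2:3000", "git-storage-3:3000"}

func shardFor(ownerRepo string) string {
	h := fnv.New32a()
	h.Write([]byte(ownerRepo))
	return shards[h.Sum32()%uint32(len(shards))]
}

func main() {
	// The front-end would reverse-proxy the request to this host.
	fmt.Println(shardFor("someuser/somerepo"))
}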

Separately, LFS would also need to be ported. Based on my current understanding, it is pretty straightforward to migrate the LFS store off of the filesystem and it could either be done through the same sharding method or through a more generic driver-based system where gitea itself doesn't actually bother with the details of how the objects are stored...

I would also like to raise a new topic that is relevant to multi-host deploys, and that is multi-host content search! Looks like this would need to use a 3rd-party indexing/search platform such as Elasticsearch...

And as far as the 'locking' issue is concerned, it seems that, as I mention in point i below, it is managed by a neat custom mutex wrapper that controls these 'pools' -- this would definitely have to be rewritten for multi-host support and should rely on an existing distributed tool

I'm keen to hear everyone's thoughts on this :)

P.S. Below I've also pasted some notes (they don't aim to be exhaustive by any means) of modules that look to be stateful and would need to be modified to support multi-host deployments...

a. https://github.com/go-gitea/gitea/blob/master/modules/cron/cron.go <- This seems to assume a single-host setup. For multi-host, this sort of stuff is not usually done within the primary application server (except for internal stuff, such as memory-related jobs...). Perhaps this can be optionally turned off and handled by a separate server? Or moved to a generic service? There are lots of wonderful open source options here

b. https://github.com/go-gitea/gitea/blob/master/modules/lfs/content_store.go <- This would need to point to the storage method for LFS, or potentially an LFS driver?

c. https://github.com/go-gitea/gitea/blob/master/modules/ssh/ssh.go <- Assumes single-host setup

d. https://github.com/go-gitea/gitea/blob/master/modules/cache/cache.go <- Looks like this is good to go with built-in support for 3rd party caching systems configured in the options!

e. https://github.com/go-gitea/gitea/blob/master/modules/indexer/repo.go <- Indexing relies on the server's FS...

f. https://github.com/go-gitea/gitea/blob/master/modules/mailer/mailer.go <- Likely needs to use a distributed task queuing service, and should probably support retries beyond a few seconds... There are some good tools for the job here that are widely supported

g. https://github.com/go-gitea/gitea/tree/master/modules/markup <- Looks like this is bound to the local filesystem

h. https://github.com/go-gitea/gitea/blob/master/modules/notification/notification.go <- Seems to be ready to use memcache, so should be fine

i. https://github.com/go-gitea/gitea/blob/master/modules/sync/exclusive_pool.go <- This seems to perform the 'locking' functionality that we've been discussing... This would essentially have to get moved up to a higher-level distributed service that would actually manage the state, with this file becoming a wrapper?
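
For item (i), a minimal sketch of what a distributed replacement for the exclusive pool could look like, assuming a Redis lock with a TTL (the key naming is made up; a production version would also verify the lock token before releasing, e.g. via a Lua script, and renew it for long operations):

// Sketch of a Redis-based repo lock shared across nodes; not Gitea code.
package repolock

import (
	"context"
	"errors"
	"time"

	"github.com/go-redis/redis/v8"
)

// Acquire takes a lock on the repo for up to 30 seconds, failing if another
// node already holds it.
func Acquire(ctx context.Context, rdb *redis.Client, repo, token string) error {
	ok, err := rdb.SetNX(ctx, "lock:repo:"+repo, token, 30*time.Second).Result()
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("repo is locked by another node")
	}
	return nil
}

// Release drops the lock; a real version would check the token first.
func Release(ctx context.Context, rdb *redis.Client, repo string) error {
	return rdb.Del(ctx, "lock:repo:"+repo).Err()
}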

@lunny
Member

lunny commented Nov 23, 2017

@svarlamov We need a filesystem or storage abstraction layer.
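
One possible shape for such a layer, sketched as a Go interface (method names here are illustrative, not an agreed Gitea API); a local-filesystem implementation would wrap os calls, while an S3/MinIO implementation would wrap object-store calls:

// Illustrative storage abstraction; names are assumptions, not Gitea's API.
package storage

import "io"

type ObjectStorage interface {
	Open(path string) (io.ReadCloser, error)
	Save(path string, r io.Reader) (int64, error)
	Delete(path string) error
	// Iterate over stored objects, e.g. for garbage collection or migration.
	IterateObjects(fn func(path string) error) error
}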

@svarlamov
Contributor Author

@lunny How would that fit with a sharding approach? I'm thinking we would be reading from the local FS in the end, but it's just a matter of how to get the request onto the right machine

@kolaente
Member

kolaente commented Nov 23, 2017

I've quickly tested a multi-container approach with a shared volume (in Rancher). It seems it cannot start the web service when running with multiple instances; I'll do some more tests later today.

@svarlamov Rancher can abstract the FS for you, to AWS for example. For the database you could do something with a Galera cluster.

@svarlamov
Contributor Author

@kolaente Interesting... I will play around with it as well. Regarding the shared-volume topic, please have a look at my comments above, namely the discussion of latency, particularly when coupled with Git's object-store architecture. This is what is discussed in GitHub's engineering blog as well, and they outline why you can't effectively run a Git service on top of that sort of shared-FS architecture.

@kolaente
Member

kolaente commented Nov 26, 2017

So, after some testing I found the following: when Gitea shares the whole data volume, it cannot start more than one container. But if everything except the gitea/indexers folder (what is this for?) is shared, it works out of the box! With that said, I don't think distributed storage needs to be built into Gitea; it is probably easier to let something like Rancher or Portworx handle the storage aspect.

But as soon as you start sending forms (for example when creating an issue), you get CSRF token errors all over the place... There is some work to do.

I also created a PR to add Gitea to the rancher community catalog: rancher/community-catalog#680

@lafriks
Member

lafriks commented Nov 26, 2017

Did you try changing the cache from memory to memcached? Session storage should also be changed from file to something else.
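
For reference, this is roughly the kind of app.ini change being suggested (a sketch; check the config cheat sheet for the exact option names in your Gitea version):

; sketch only - exact keys may vary by version
[cache]
ADAPTER = memcache
HOST    = 127.0.0.1:11211

[session]
PROVIDER        = redis
PROVIDER_CONFIG = network=tcp,addr=127.0.0.1:6379,db=0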

@dapperfu

Some benchmarks and CPU numbers putting Gitea behind nginx.

This is tested on my FreeNAS machine with Gitea & nginx running in a jail.

$ sysctl hw.model hw.ncpu
hw.model: Intel(R) Atom(TM) CPU  C2758  @ 2.40GHz
hw.ncpu: 8

Using bombardier on another machine to drive HTTP load.
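
The invocation was presumably something along these lines (reconstructed from the output below, not copied from the original post):

# 200 concurrent connections, 10000 requests against the explore page
bombardier -c 200 -n 10000 http://172.16.3.104:3000/explore/repos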

Gitea configured to use mysql and memcached. Nginx configured with 8 worker processes.

Baseline idle:
last pid: 88584; load averages: 0.69, 0.78, 0.79 up 8+15:11:08 17:08:20
31 processes: 1 running, 30 sleeping
CPU: 12.3% user, 0.0% nice, 1.2% system, 1.6% interrupt, 84.9% idle
Mem: 417M Active, 85M Inact, 46M Laundry, 14G Wired, 995M Free
ARC: 8561M Total, 5333M MFU, 1059M MRU, 2836K Anon, 146M Header, 2019M Other
5371M Compressed, 6498M Uncompressed, 1.21:1 Ratio
Swap: 6144M Total, 2057M Used, 4087M Free, 33% Inuse

PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
64132 mysql        49  52    0  1970M  5980K select  1   8:49   0.06% mysqld
57145 nobody       10  20    0 37408K  1388K kqread  2   0:17   0.01% memcached
66858 git          39  21    0   118M 21240K uwait   0  27:42   0.00% gitea
69043 www           1  52    0 35604K  1144K kqread  4   6:22   0.00% nginx
69044 www           1  20    0 35604K  1152K kqread  6   6:17   0.00% nginx
69042 www           1  52    0 35604K  1124K kqread  6   6:15   0.00% nginx
69045 www           1  20    0 35604K  1152K kqread  0   6:12   0.00% nginx
69046 www           1  52    0 35604K  1112K kqread  5   6:06   0.00% nginx
69047 www           1  52    0 35604K  1132K kqread  0   5:56   0.00% nginx
69048 www           1  20    0 35604K  1744K kqread  4   5:16   0.00% nginx
69041 www           1  20    0 37652K  1176K kqread  0   5:11   0.00% nginx

Connecting to gitea directly with 200 connections:

last pid: 89104;  load averages: 11.48,  4.97,  2.46                                                                                                                                                                 up 8+15:13:41  17:10:53
31 processes:  2 running, 29 sleeping
CPU: 67.4% user,  0.0% nice, 22.9% system,  5.1% interrupt,  4.6% idle
Mem: 117M Active, 343M Inact, 637M Laundry, 14G Wired, 173M Free
ARC: 8704M Total, 5249M MFU, 1197M MRU, 56M Anon, 147M Header, 2054M Other
     5404M Compressed, 6704M Uncompressed, 1.24:1 Ratio
Swap: 6144M Total, 2012M Used, 4132M Free, 32% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
66858 git          43  80    0   496M   439M RUN     7  32:04 495.09% gitea
64132 mysql        65  52    0  1990M   188M select  2  10:06 161.79% mysqld
57145 nobody       10  20    0 37408K  1388K kqread  2   0:17   0.02% memcached

Latency results:

Bombarding http://172.16.3.104:3000/explore/repos with 10000 requests using 200 connections
 10000 / 10000 [=======================================================================================] 100.00% 18s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       544.74     499.01    6392.01
  Latency      365.27ms    74.52ms      1.08s
  Latency Distribution
     50%   350.44ms
     75%   444.55ms
     90%   544.68ms
     99%   739.92ms
  HTTP codes:
    1xx - 0, 2xx - 9810, 3xx - 0, 4xx - 0, 5xx - 190
    others - 0
  Throughput:     4.04MB/s

Connecting using nginx as a reverse proxy:

last pid: 90424;  load averages:  2.56,  5.77,  3.85                                                                                                                                                                 up 8+15:18:32  17:15:44
31 processes:  3 running, 28 sleeping
CPU:  5.4% user,  0.0% nice, 26.8% system, 16.1% interrupt, 51.8% idle
Mem: 108M Active, 482M Inact, 517M Laundry, 14G Wired, 397M Free
ARC: 8485M Total, 5114M MFU, 1286M MRU, 5699K Anon, 148M Header, 1933M Other
     5408M Compressed, 6795M Uncompressed, 1.26:1 Ratio
Swap: 6144M Total, 2011M Used, 4133M Free, 32% Inuse
 
  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
69043 www           1  48    0 35604K  1452K kqread  3   6:28  44.93% nginx
69046 www           1  47    0 35604K  1424K kqread  3   6:12  40.29% nginx
69044 www           1  47    0 35604K  1456K CPU5    5   6:22  36.64% nginx
69048 www           1  45    0 35604K  2112K kqread  4   5:21  35.97% nginx
69045 www           1  43    0 35604K  1456K kqread  2   6:17  35.21% nginx
69047 www           1  47    0 35604K  1412K kqread  5   6:01  31.80% nginx
69042 www           1  42    0 35604K  1432K kqread  7   6:21  29.14% nginx
69041 www           1  32    0 37652K  1420K RUN     1   5:15  27.52% nginx
66858 git          43  20    0   496M   439M uwait   6  40:56   0.00% gitea
64132 mysql        49  52    0  1997M   195M select  1  13:05   0.00% mysqld
57145 nobody       10  20    0 37408K  1388K kqread  6   0:18   0.00% memcached

Latency results:

Bombarding http://172.16.3.104:80/explore/repos with 10000 requests using 200 connections
 10000 / 10000 [========================================================================================] 100.00% 1s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec     10917.23    1672.91   15680.40
  Latency       17.78ms     1.22ms    52.06ms
  Latency Distribution
     50%    18.02ms
     75%    18.02ms
     90%    19.02ms
     99%    28.03ms
  HTTP codes:
    1xx - 0, 2xx - 10000, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:   106.99MB/s

Nginx was able not just to keep up with the load but also to serve the results in less time. What was interesting was what happened when you started using a lot more connections.

Testing with 2000 connections caused Gitea to choke and start returning 5xx codes, while nginx kept up (even if it did slow down a bit; 90th-percentile latency was around 600 ms).

@kolaente
Member

@jed-frey Cool, so apparently it would already help to load-balance with some nginx?

Have you tried increasing max_connections in MySQL? I think it is usually 125, so if you open more connections than that, the MySQL server blocks new requests so that only a maximum of 125 clients are served at once. I think if a request fails for that reason, the whole request fails with a 500 - which could explain why Gitea returned 5xx codes when you tried with 2000 connections.
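
For reference, raising that limit is a one-line change in the MySQL server config (the value below is an arbitrary example):

# my.cnf sketch; pick a value that fits your workload and memory
[mysqld]
max_connections = 500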

Guess I'll try running it in parallel when I have some time during the holidays.

Also, the PR I mentioned earlier got merged - Gitea is now in the "official" Rancher Community Catalog!

@imacks

imacks commented Apr 8, 2018

is there any progress on this?

@svarlamov
Contributor Author

@imacks Not that I know of. It still seems that NFS/sharding is the best way to make this work at the moment.

@jhackettpps

It could be that an S3 storage engine would work fine if Gitea worked like a write-through cache. Keep a local copy on the server and force-push to an S3-backed Git repository every time there's a push to the repository. Keep a write-ahead log so you can account for what has/hasn't been moved into S3 and what is 'cached' on the local host. It might still require some sharding to prevent mutually exclusive pushes, but probably less.
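
A very rough sketch of the 'push-through' half of that idea, as a server-side post-receive hook that mirrors every accepted push to a backup remote (the remote name is made up, and the write-ahead-log bookkeeping is not shown):

#!/bin/sh
# post-receive sketch: mirror every accepted push to a remote named "offsite",
# which could point at a second server or an S3-backed remote helper.
git push --mirror offsite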

@svarlamov
Contributor Author

@jhackettpps I think the solution you raise is an interesting one; however -- at least in my interpretation of it -- it does not really address the issue of a dataset that's too large to practically/economically fit onto disk. So S3 is great if you have a few hundred GBs and several servers for HA, but once you have a few dozen TBs it starts to become less appealing...

My understanding is that the Gitaly solution -- which is basically just sharding and they're trying to cache on the machines to optimize Git itself as well -- is the industry standard. Similar to what GH does as well...

@imacks

imacks commented Jun 4, 2018

My current workaround is somewhat like sharding, but more hacky. I run multiple Gitea containers, then use a homebrewed load balancer to redirect based on the requested user account. Thus each container serves only a dedicated set of assigned users, and a user is always served by the same container. A container itself is failsafed only by health-check restarts. The caveat is that users on two different containers can't see each other's repos, but it doesn't matter for me because most users here are using private repos.
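
A sketch of what such a homebrewed router could look like: pick a backend Gitea container from the first path segment (the user or org) and reverse-proxy to it. The backend addresses are made up, and the hash-based rule is just one option; a static user-to-backend table would work the same way:

// User-based routing proxy sketch; backends and routing rule are assumptions.
package main

import (
	"hash/fnv"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

var backends = []*url.URL{
	mustParse("http://gitea-a:3000"),
	mustParse("http://gitea-b:3000"),
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// backendFor maps a username deterministically onto one backend, so a given
// user is always served by the same container.
func backendFor(user string) *url.URL {
	h := fnv.New32a()
	h.Write([]byte(user))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// The first path segment of /<user>/<repo>/... decides the backend.
		user := strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 2)[0]
		httputil.NewSingleHostReverseProxy(backendFor(user)).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}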

@dapperfu

dapperfu commented Jun 4, 2018

Is there a tool to stress test Git servers?

I've just been using HTTP benchmark tools to hammer home pages and see how it handles load, but a full stress test with cloning and pushing would be helpful.
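
Lacking a dedicated tool, one crude approximation is to drive real Git operations in parallel (a sketch; the repo URL and counts are placeholders):

# 50 clones, 10 at a time, against a test repo; a push test would work the
# same way with per-clone commits followed by git push.
seq 1 50 | xargs -P 10 -I{} git clone http://gitea.example.com/org/repo.git /tmp/stress-{}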

@jhackettpps

jhackettpps commented Jun 5, 2018 via email

@svarlamov
Contributor Author

@jhackettpps I think your last post captures two important points -- Gitea's core objective is not large-scale Git, and the folks at GitLab have already invested a lot into their solution, which is actually already in production...

It'd be cool if Gitea could have storage 'drivers' and be able to use filesystem Git or Gitaly for users who need true HA. I haven't dug too far into the Gitaly APIs, but I believe from early on one of their goals was to make it easy to implement as a replacement for local Git within the Gitlab Rails code...

@dapperfu

dapperfu commented Jun 6, 2018

I disagree. GitLab is a resource pig. Just one instance is annoying. My script to get it installed on FreeNAS jails was non-trivial.

There's a reason I moved to Gitea. Being able to bring that to enterprise would be an amazing asset.

@stanier

stanier commented Jul 27, 2018

@svarlamov I sympathize with @jed-frey; GitLab is so resource-hungry it makes Oracle look good. I'm not saying that having a production-ready Gitea readily available should be a top priority, but it should definitely be proposed and considered as a viable step forward.

On a related note, is there anything that can be done to make Gitea cooperate better with S3-compatible services, besides just throwing s3fs-fuse and symlinks at it? (Not trying to say S3 should be the solution for public/large-org instances, I just really want to see how badly I can abuse my DO droplets.)

@jag3773

jag3773 commented Oct 24, 2018

In April of 2017 I set up a cluster of systems on AWS (see this CloudFormation script) backed by an EFS share (a managed NFS share). The three-node cluster made use of a MySQL-compatible RDS instance and a Redis instance for session data. Each node ran nginx as a proxy for Gitea, and there was an AWS load balancer in front of the nodes. The load balancer also handled port 22, so you've got one IP for the whole setup. I had also configured the load balancer to use sticky sessions, so a single user is glued to a single node during their session.

The system worked "just fine" except that the main repo page which loads the commit messages for the files could take a long time to load if there were thousands of commits in the repo. Because of that, we did #3452. Now that 1.5 has that included, I'm looking at testing that AWS cluster again to see if performance is sufficient for production use.

Other than that issue, HTTP and Git actions operated at acceptable speeds. It's true that they were not as fast as on a native filesystem, but they were within the bounds of user expectations.

@imacks

imacks commented Oct 26, 2018

@jag3773 Your setup is similar to mine. I had an incident on AWS last week where Gitea went mad with disk writes and AWS charged me for it. I couldn't figure out what went wrong, though.

@porunov

porunov commented Nov 29, 2018

Hi everyone. Has somebody tried GlusterFS?
I've tried it with Gogs in my small "home" virtual cluster and it worked pretty well, but I've never tried it in production.
The setup was the following:
"Virtual" data center 1 is the Master DC with 4 machines (distributed volume with 2 replicas).
"Virtual" data center 2 is the Slave DC, also with 4 machines (distributed volume with 2 replicas).
For the database I used a MariaDB Galera cluster (multi-master across the 2 virtual DCs).

So basically, we always write into a single datacenter; within that datacenter the write is replicated across instances, and it is asynchronously replicated to the other datacenter. As we have a distributed volume, we only touch "some" instances during a write.

With such a setup we can control the number of replicas, have distributed volumes (to increase bandwidth and storage), and have asynchronous replication to other datacenters in case a whole datacenter is shut down.

Locking worked pretty well. I've tested scenarios like:

  • While pushing into a big repo, try to simultaneously push another commit (the second user waits for the lock, i.e. it works)
  • While pushing a big commit, try to shut down the server that is receiving it after part of the commit has arrived (the push fails, i.e. it works)
  • Some other locking + server crashing tests (also worked)

But again, I didn't test it in a real environment and don't have any numbers. Also, I've tried this setup with Gogs, but I am planning to move to Gitea.

Possibly this info may be helpful for someone.

@schmitch

schmitch commented Dec 6, 2018

By the way, for that to work Gitea probably also needs good cache/session storage that works well across environments.

Currently Redis is a good way to make it work, but when using k8s with the redis-ha Helm chart it won't work, since the Redis client is not Sentinel-aware. I've raised two issues at Macaron (the web framework Gitea uses) to support it.

@ntimo

ntimo commented Jan 7, 2019

I set up a Docker Swarm in the Hetzner Cloud with this plugin and it works amazingly well. All my workers are in the same region, so when one node dies the Hetzner volume with the data is attached to another node and Gitea is started there. For the HTTPS part I use Traefik.

Stack files can be found here: https://github.com/ntimo/gitea-docker-swarm

@omniproc

Still, I wonder what the benefit of such a setup is, given that the bottleneck will still be your NFS share. You could argue availability is better, but then again, there's no big loss from a single node failure when we're talking about a container setup. The expected downtime is minimal (container restart time), so the benefit vs. complexity of a distributed setup as outlined in this topic is probably not a real game changer.

In the end we're talking about which layer to replicate/sync the data on. Your current setup uses the filesystem to do the job. This of course is an easy solution, since you don't need to do much and it will work with nearly every application. But it only scales up, not out.

So what I'd like to see for a real scale-out solution is data replication at the application layer.
Just my thoughts...

@davidak

davidak commented Jan 31, 2019

So what I'd like to see for a real scale-out solution is data replication at the application layer.

@M451 I don't think it makes sense for every application to reinvent the wheel and implement its own data replication. Maybe there are libraries for that?

Spontaneous idea: if we supported S3, we could use MinIO.
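
As a sketch of what that could look like for, say, LFS objects, using the minio-go client (the endpoint, credentials, bucket and paths are placeholders):

// Sketch: store an LFS object in MinIO/S3 with the minio-go client.
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	client, err := minio.New("minio.example.com:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Upload a local LFS object under its content-addressed OID.
	_, err = client.FPutObject(ctx, "gitea-lfs",
		"objects/ab/cd/abcd1234", "/var/lib/gitea/data/lfs/abcd1234",
		minio.PutObjectOptions{ContentType: "application/octet-stream"})
	if err != nil {
		log.Fatal(err)
	}
}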

@feluxe

feluxe commented Feb 1, 2019

Still I wonder what's the benefit of such a setup, given that the bottleneck will still be your NFS share.

Your current setup is using the filesystem to do the job. This if course is an easy solution since you do not need to do much and it'll work with nearly every application. But it does only scale up, not out.

I have no experience with this myself, but I think you can scale file-system throughput horizontally by clustering the file-system (see GlusterFS, Amazon EFS)

@omniproc

omniproc commented Feb 3, 2019

@davidak I did not propose to reinvent the wheel. There are many existing replication and consistency algorithms (e.g. Paxos, Raft) and open-source implementations of them available. There are also databases that implement them (e.g. MongoDB). So, no need to reinvent the wheel. All I'm saying is that IF the goal is to provide scale-out, then replication at a higher level would be nice.

@feluxe And what I just said is pretty much the answer to what you suggest. Sure, it's possible to replicate data at a lower level; GlusterFS, Ceph and others exist. Maybe it's even the better option in terms of efficiency (to be evaluated). BUT what makes scale-out at layer 7 interesting is that you decouple the application from the backbone infrastructure, and this limits the requirements on the infrastructure.

E.g. if you want a fully stateless version of Gitea some day, one possible way would be to dockerize it (I'm aware there are already dockerized versions). IF you implement a feature (e.g. scale-out in this case) on a lower layer than what a Docker container encapsulates, then that becomes a hard requirement on the Docker environment / infrastructure the user operates, which negates many of the use cases we want containers for in the first place (making the application independent of the underlying infrastructure for all of its features).

I'm not suggesting one of the options is better. GlusterFS or any other infra dependency could be a supported alternative and a short-term solution before anything on a higher layer gets implemented. You should just be aware of the implications.

@lunny lunny removed this from the 2.x.x milestone Feb 27, 2019
@lunny
Member

lunny commented Feb 27, 2019

I think Gitea can now run across multiple servers, e.g. https://gitea.com, and how to do that is a deployment question rather than a development one. I will close this one; you can discuss it further on Discord or Discourse.

@lunny lunny closed this as completed Feb 27, 2019
@go-gitea go-gitea locked and limited conversation to collaborators Nov 24, 2020