Improve plugin search and discoverability

From the 4th we can submit proposals, and I am interested in the Improve plugin search and discoverability project. Before writing a proposal I want to clear up some doubts, so I have a few questions in mind:

The current site is the /readme folder, which is built and hosted with GitHub Pages, but the plugin section looks like it could grow large as the number of plugins grows. So I was thinking of a React web app with dynamic routes for every plugin. Can I build it that way? Even if we make a static build, we have to rebuild it whenever a new plugin is added. With React that problem goes away: as soon as a new plugin is added, it is visible to users. As mentioned, the whole project would be in TypeScript, with GitHub Actions for CI/CD.
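To make the dynamic-routes idea concrete, here is a rough sketch of deriving one route per plugin from a published manifest file. The manifest URL and its shape (a map of plugin id to manifest object) are my assumptions about the plugins repo, not a confirmed API:

```typescript
// Minimal shape of one plugin manifest entry (assumed).
interface PluginManifest {
  id: string;
  name: string;
  description?: string;
}

// Assumed location of the aggregated manifest file in the plugins repo.
const MANIFEST_URL =
  "https://raw.githubusercontent.com/joplin/plugins/master/manifests.json";

// Pure helper: turn the manifest map into client-side route paths.
function toRoutes(manifests: Record<string, PluginManifest>): string[] {
  return Object.keys(manifests).map((id) => `/plugin/${id}`);
}

// Fetch the manifest at runtime, so new plugins appear without a rebuild.
async function loadRoutes(): Promise<string[]> {
  const res = await fetch(MANIFEST_URL);
  const manifests = (await res.json()) as Record<string, PluginManifest>;
  return toRoutes(manifests);
}
```

The router (React Router or similar) would then register `/plugin/:id` once and look the id up in the fetched manifest, instead of generating a page per plugin at build time.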

Any recommendations will be appreciated


I'm interested in the same project and I have a similar idea to yours. Indeed, a dynamic site can be much more efficient than a static one when the number of plugins is huge. But the problem is, as you mentioned, that currently all the homepage and help pages are hosted on GitHub Pages, which only supports static files. And there is no reason for a non-profit open-source project like Joplin to invest in a server to host these pages.
With all that said, there is still a way to make a dynamic site on GitHub Pages. We can use GitHub issues as the information API, and the static page hosted on GitHub Pages becomes dynamic when the two are combined.
I'm thinking maybe the process should look like this:

  1. joplin/plugin updates information about plugins.
  2. A GitHub Action generates detailed information and writes it into issues via the GitHub API.
  3. The static website hosted on GitHub Pages fetches the data from the repo issues through the GitHub API.
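Roughly, step 3 could look like the sketch below. The repo name and the convention of storing the plugin info as JSON in the issue body are assumptions on my part; the `GET /repos/{owner}/{repo}/issues` endpoint itself is a real, unauthenticated GitHub REST API call (rate-limited without a token):

```typescript
// Minimal shape of a GitHub issue as returned by the REST API.
interface IssueRecord {
  title: string;
  body: string; // assumed to hold the plugin info as a JSON string
}

// Parse the JSON payload out of one issue body; null if it isn't valid JSON.
function parsePluginInfo(issue: IssueRecord): Record<string, unknown> | null {
  try {
    return JSON.parse(issue.body) as Record<string, unknown>;
  } catch {
    return null;
  }
}

// Fetch open issues from the data repo through the public GitHub REST API.
async function fetchPluginIssues(owner: string, repo: string): Promise<IssueRecord[]> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/issues?state=open&per_page=100`,
  );
  return (await res.json()) as IssueRecord[];
}
```

The page on GitHub Pages stays a static file; only the data it renders is fetched live.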

In this way, we can even implement features like comments or ratings on a specific plugin. I've done this before and it works great.
Does this plan sound feasible to you? Looking forward to discussing it with you!

Hey,
That's a nice idea, but I was thinking: why do that much work when you can just fetch the JSON file of the plugin repo and show it on the site? Additionally, we can fetch the README of each plugin and show it on that plugin's page. Why maintain a dead repo that runs a GitHub Action every time a new plugin is added? Making a dynamic site will also help in adding more functionality and keeping the code clean.
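For example, pulling a plugin's README straight from its repo could look like this. The raw-file URL layout is how raw.githubusercontent.com serves file contents, but the repo slug, default branch, and README filename here are assumptions about any given plugin repo:

```typescript
// Build the raw-content URL for a repo's README (assumed branch and filename).
function readmeUrl(repoSlug: string, branch = "master"): string {
  // repoSlug is "owner/name", e.g. "joplin/plugins".
  return `https://raw.githubusercontent.com/${repoSlug}/${branch}/README.md`;
}

// Fetch the README markdown for rendering on the plugin's page.
async function fetchReadme(repoSlug: string): Promise<string> {
  const res = await fetch(readmeUrl(repoSlug));
  if (!res.ok) throw new Error(`README not found for ${repoSlug}`);
  return res.text();
}
```

No Action, no intermediate issues: the site reads the plugin repo directly at request time.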

And it won't require any server anyway. I made a simple website for The Public APIs Project. You can visit it here; it's been up for 8 months now without any cost.

Even if we want to keep the whole site static, we can add revalidation, which avoids rebuilding the whole site every time a new plugin is added: it just builds the page for the new plugin on the fly, without the cost of a full build. I have been doing this in my projects; it works like a charm and also provides a nice user experience.
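In Next.js terms, the revalidation idea looks roughly like the sketch below. `getStaticPaths`, `getStaticProps`, `fallback: "blocking"`, and `revalidate` are the real Next.js conventions for Incremental Static Regeneration; the URLs and data shapes are placeholders, not the actual plugin repo layout:

```typescript
// Placeholder data source, not a real endpoint.
const MANIFEST_URL = "https://example.com/manifests.json";

// Pure helper: turn known plugin ids into Next.js static paths.
function toStaticPaths(ids: string[]) {
  return {
    paths: ids.map((id) => ({ params: { id } })),
    // New plugins get their page built on first request, not at build time.
    fallback: "blocking" as const,
  };
}

// Runs at build time: pre-render only the plugins known right now.
export async function getStaticPaths() {
  const manifests = (await (await fetch(MANIFEST_URL)).json()) as Record<string, unknown>;
  return toStaticPaths(Object.keys(manifests));
}

// Runs per page: fetch one plugin's data, then cache the rendered page.
export async function getStaticProps({ params }: { params: { id: string } }) {
  const plugin = await (
    await fetch(`https://example.com/plugins/${params.id}.json`) // placeholder
  ).json();
  // Serve the cached static page, regenerating it at most once an hour.
  return { props: { plugin }, revalidate: 3600 };
}
```

So the site ships real HTML like any static build, but a new plugin's page appears on its first request instead of waiting for a redeploy.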


Hi krishna,
Thx for your reply! Great to have someone to talk about it with!

Yes, it's simpler to pull the data from the repo directly through the GitHub API. But the point of using GitHub issues (or maybe gists) is to provide an API that the page hosted on GitHub Pages can dynamically query and render, while also supporting dynamic CRUD.

There is no need to run a GitHub Action every time a new plugin is added. In my case, the data is stored in GitHub issues (or pulled directly from the repo). Pages can use Ajax or other means to fetch the data dynamically through the API and render it.

Is this page dynamic? And for your website, you already had existing open APIs, so you didn't need a server to provide the API and data.

This is a great idea if Laurent really just wants a static one.

It's kind of both. When the site is first built, it produces static files after fetching everything available at that time (you can see in the inspect element that it already contains the HTML, unlike React, which only ships JavaScript). And if the data from the API changes, it generates the static file for the new data and ships it without a full build. I made a serverless function on the site to scrape data from the GitHub page, but then found out it already had an API in Golang, so I just used that. (Go is much faster than Node functions.)

I am curious about this; can you point me to an example repo?