Generate a Robots.txt file

Read how to dynamically generate a robots.txt file for your site.

Create a new Code file and name it 'Robots-txt'.

Options:

User-agent: *
Disallow:

‘User-agent’ refers to the robots, or search engine spiders, that crawl your site. The asterisk (*) means the line applies to all spiders. Here, no file or folder is listed on the Disallow line, which means every directory on your site may be accessed. This is a basic robots.txt file.

You can also block the search engine spiders from your whole site. To do this, add these two lines to the file:
User-agent: *
Disallow: /

If you’d like to block the spiders from certain areas of your site, your robots.txt might look something like this:
User-agent: *
Disallow: /database/
Disallow: /scripts/

The above three lines tell all robots that they are not allowed to access anything in the database and scripts directories or their sub-directories. Keep in mind that only one file or folder can be listed per Disallow line; you may add as many Disallow lines as you need.
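
For example, to block a single page as well as a folder, put each on its own Disallow line (the paths below are just placeholders):

User-agent: *
Disallow: /private-page.html
Disallow: /temp/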

Be sure to add the location of your XML sitemap to the robots.txt file. This will ensure that the spiders can find your sitemap and easily index all of your site’s pages. Use this syntax:
Sitemap: https://www.mydomain.com/sitemap.xml

Once complete, save your robots.txt file to the root directory of your site. For example, if your domain is www.mydomain.com, you will place the file at www.mydomain.com/robots.txt.
Once the file is in place, check it for errors (link below).

Code example:

User-agent: Googlebot
Disallow: /nogooglebot/

User-agent: *
Allow: /

Sitemap: http://www.example.com/sitemap.xml

Here's what the robots.txt file in the example above means:

The user agent named Googlebot is not allowed to crawl any URL that starts with http://example.com/nogooglebot/.
All other user agents are allowed to crawl the entire site. This could have been omitted and the result would be the same; the default behavior is that user agents are allowed to crawl the entire site.
The site's sitemap file is located at http://www.example.com/sitemap.xml.

To generate the file dynamically, you can add a "robots.txt.hash" file to your root folder.

The file will be served at robots.txt (without the .hash extension), just like all other .hash files.

See example:

... missing ...
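
As a rough sketch, in the simplest case the robots.txt.hash file is just static text that will be served at /robots.txt; #...# blocks (the same template syntax used in the sitemap example below) can be added wherever a value needs to be computed. The paths and sitemap URL here are placeholders:

User-agent: *
Disallow: /database/
Disallow: /scripts/

Sitemap: https://www.mydomain.com/sitemap.xml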

Dynamic sitemap.xml file (sitemap.xml.hash)

This example outputs all your root folders with the minimum info:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
#for(let item of getfolders("/")) {#
   <url>
      <loc>#item.CompleteUrl#</loc>
      <priority>1</priority>
   </url>
#}#
</urlset> 
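
For a site with, say, two root folders, the rendered sitemap.xml would look something like this (the folder URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <loc>https://www.mydomain.com/products/</loc>
      <priority>1</priority>
   </url>
   <url>
      <loc>https://www.mydomain.com/about/</loc>
      <priority>1</priority>
   </url>
</urlset>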

Consider using "getfiles" and also a recursive loop for a full sitemap.
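
A shallow sketch of that idea is below. It assumes that getfiles takes a path argument like getfolders does and that its items also expose a CompleteUrl property; both are assumptions, so check the linked example below for the exact API. It lists the root folders and root files only; a full sitemap would descend into each sub-folder the same way:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
#for(let item of getfolders("/")) {#
   <url>
      <loc>#item.CompleteUrl#</loc>
      <priority>1</priority>
   </url>
#}#
#for(let item of getfiles("/")) {#
   <url>
      <loc>#item.CompleteUrl#</loc>
      <priority>1</priority>
   </url>
#}#
</urlset>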

More info:
https://developers.docly.net/JavaScript-examples/Websites/Example-of-dynamic-sitemap.xml-file