Triton Datacenter user portal

Installing in Production

Be familiar with the steps in Installation below, since the Angular app must be built first.

Once the Angular app is built, provision a small base-64-lts 20.4.0 VM, connected solely to the external network (aka the public Internet). It should have (although does not require) two JSON hash tables in the VM's internal_metadata: sc:image_subscription_rates and sc:package_rates. Both map UUID strings to floats; the image float is a monthly subscription rate, and the package float is an hourly rate. A simplified example of a VM's metadata:

"internal_metadata_namespaces": ["sc"],
"internal_metadata": {
        "sc:image_subscription_rates": "{\"ca872441-09a3-4ceb-a843-6fa83ac6795d\": 70}",
        "sc:package_rates": "{\"64e9bcdd-1ba7-429f-9243-642891b81028\": 0.45}"
}

Be aware that malformed (incorrectly serialized) image and package JSON strings will not prevent the server from starting; errors will only show up in the client-side app after login, so make sure the JSON is serialized correctly.
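Since bad rate JSON only surfaces client-side, it may be worth sanity-checking each serialized string before writing it into internal_metadata. A minimal sketch in Node (validateRates is a hypothetical helper, not part of this repo; the UUIDs are the ones from the example above):

```javascript
// Check that a serialized rate table is a JSON object mapping
// UUID strings to finite numbers, as the portal expects.
function validateRates(serialized) {
    const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/;
    let rates;
    try {
        rates = JSON.parse(serialized);
    } catch (e) {
        return false; // not valid JSON at all
    }
    if (rates === null || typeof rates !== 'object' || Array.isArray(rates)) {
        return false; // must be a plain object
    }
    return Object.entries(rates).every(([uuid, rate]) =>
        UUID_RE.test(uuid) && typeof rate === 'number' && Number.isFinite(rate));
}

console.log(validateRates('{"64e9bcdd-1ba7-429f-9243-642891b81028": 0.45}')); // true
console.log(validateRates('{"64e9bcdd-1ba7-429f-9243-642891b81028": 0.45')); // false (truncated)
```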

Once the VM is running, the following steps are needed from within the VM:

pkgin in gmake
mkdir -p /opt/spearhead/portal

From this repo, copy in bin/, cfg/, smf/, static/ (since static/ is a symlink, this means the build in app/dist should be copied into static/ in prod), and the top-level files. Notably, avoid app/ and node_modules/. In production, adjust the config in /opt/spearhead/portal/cfg/prod.json. Lastly:

pushd /opt/spearhead/portal
npm install
svccfg import smf/service.xml
svcadm enable portal

The application will now be running.


Installation

First install the server-side libraries:

npm install

Then install the Angular compiler needed for the client-side app:

npm install -g @angular/cli
pushd app && npm install && popd

Build the client-side app:

For development:

pushd app && ng build && popd

For production (tree-shakes, minifies and gzips for a smaller size):

pushd app
ng build --prod
for f in $(find dist -type f -not -name '*.html' -not -name '*.png' -not -name '*.jpg'); do
  gzip --best "$f"
done
popd

Generate server certificates

pushd cfg
openssl genrsa -out key.pem
openssl req -new -key key.pem -out csr.pem
openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
rm csr.pem
popd


Ensure the config file in cfg/ matches your details. If running in production, name the config file cfg/prod.json.

Relevant configuration attributes:

  • server.port: the port this server will serve the app from
  • server.key: path to the private key for TLS
  • server.cert: path to the PKIX certificate for TLS
  • urls.local: the domain or IP the SSO will redirect back to (aka this server)
  • urls.sso: the URL to the SSO
  • urls.cloudapi: the URL to cloudapi
  • key.user: name of Triton user who has "Registered Developer" permission set
  • SSH fingerprint of the Triton user (same format as node-triton uses)
  • key.path: path to private key of Triton user
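The exact config schema is not documented here, but based on the attribute list above, a cfg/prod.json might look roughly like the following. All values are placeholders, and the name of the fingerprint attribute is an assumption (the list above omits it):

```json
{
    "server": {
        "port": 443,
        "key": "cfg/key.pem",
        "cert": "cfg/cert.pem"
    },
    "urls": {
        "local": "https://portal.example.com",
        "sso": "https://sso.example.com",
        "cloudapi": "https://cloudapi.example.com"
    },
    "key": {
        "user": "portal-admin",
        "fingerprint": "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99",
        "path": "cfg/triton-user-key.pem"
    }
}
```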

The SSH key used must be the correct format, e.g. generated with:

ssh-keygen -m PEM -t rsa -C "your@email.address"

Running the server

node bin/server.js cfg/prod.json

The server emits a lot of JSON log data about every request. This is easier for a human to read with bunyan installed ("npm install -g bunyan"), piping the output through it:

node bin/server.js cfg/prod.json | bunyan


HTTP API

GET /*

This is where all the front-end code goes. All files are served as-is from the static/ directory (by default a symlink from static/ to app/dist). The default file is static/index.html. There is no authentication; all files are public.

GET /api/login

Call this endpoint to begin the login cycle. It will redirect you to the SSO login page: an HTTP 302, with a Location header.


GET/POST/PUT/DELETE /api/*

All other calls under /api will be passed through to cloudapi. For these calls to succeed, they MUST provide an X-Auth-Token header containing the token returned from SSO.

GET /packages.json

Returns a JSON file mapping package UUIDs (strings) to the hourly rate (a float) that a customer will be charged for running a VM using that package. Usage is charged pro-rata, down to one-minute granularity.

GET /images.json

Returns a JSON file mapping image UUIDs (strings) to the monthly rate (a float) that a customer will be charged for running a VM using that image. This is a flat monthly charge, regardless of how long the VM exists (even if only a few minutes).
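Putting the two rate tables together, estimating a VM's charge for a month is straightforward: the package rate bills per hour pro-rated to the minute, while the image rate is a flat monthly fee. A sketch (estimateCharge is a hypothetical helper; the rates are the 0.45/hour package and 70/month image from the metadata example above):

```javascript
// Estimate the charge for a VM that ran for `minutes` minutes in one month,
// given its hourly package rate and flat monthly image rate.
function estimateCharge(packageHourlyRate, imageMonthlyRate, minutes) {
    const packageCharge = packageHourlyRate * (minutes / 60); // pro-rated per minute
    const imageCharge = imageMonthlyRate;                     // flat, regardless of uptime
    return packageCharge + imageCharge;
}

// 90 minutes of a 0.45/hour package plus the 70/month image subscription:
console.log(estimateCharge(0.45, 70, 90)); // ~70.675
```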

Interaction cycle

client --- GET /api/login --------> this server
       <-- 302 Location #1 ----

client --- GET <Location #1> --> SSO server
       <separate SSO cycle>
       <-- 302 with token query arg

From now on, call this server as if it were a cloudapi server (using cloudapi paths), except prefix every path with "/api". Always provide the X-Auth-Token header.
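The client side of this cycle can be sketched as follows. This assumes the SSO redirect's query argument is literally named "token" and uses a hypothetical portal.example.com host; neither is confirmed by this repo:

```javascript
// Pull the token out of the SSO redirect URL.
function tokenFromRedirect(redirectUrl) {
    return new URL(redirectUrl).searchParams.get('token');
}

// Build Node https.request options for a cloudapi call proxied by this
// server: the cloudapi path is reused verbatim, prefixed with /api, and
// the token travels in the X-Auth-Token header.
function apiRequestOptions(cloudapiPath, token) {
    return {
        method: 'GET',
        path: '/api' + cloudapiPath,
        headers: { 'X-Auth-Token': token },
    };
}

const token = tokenFromRedirect('https://portal.example.com/?token=abc123');
console.log(apiRequestOptions('/my/packages', token));
// { method: 'GET', path: '/api/my/packages',
//   headers: { 'X-Auth-Token': 'abc123' } }
```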

For example, to retrieve a list of packages:

client --- GET /api/my/packages --> this server
       <-- 200 JSON body ------

The most useful cloudapi endpoints to begin with will be ListPackages, GetPackage, ListImages, GetImage, ListMachines, GetMachine, CreateMachine and DeleteMachine (see cloudapi docs).