This is how we built this!

Author

José R. Ferrer Paris

No place like home

Who am I? I prepared some presentation slides about myself. You can find my personal website in English and Español.

These instructions are in English, but this site has content in multiple languages… I know it is confusing, but this is how I work. If you require clarifications please open an issue or send me a message.

All files are available in a GitHub repository. I am using Quarto to build this site.

Render site

With Quarto and RStudio we can open the Build tab and select Build Website…

Or we can go to the command line and render the site:

quarto render 

Sing All the rowboats in the meantime (optional).

And then launch a preview:

quarto preview

If this works, we can just git add, git commit and git push to the repo, and then follow the steps below to make the site public.
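For reference, a minimal sequence could look like this (the commit message is just a placeholder, and I assume the remote and default branch are already configured):

git add .
git commit -m "update site content"
git push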

Publish site

We follow these instructions: https://quarto.org/docs/publishing/github-pages.html

Create an empty branch:

git checkout --orphan gh-pages
git reset --hard # make sure all changes are committed before running this!
git commit --allow-empty -m "Initialising gh-pages branch"
git push origin gh-pages

Modify these options in the Settings : Pages menu of the GitHub repository:

  • set source branch to gh-pages
  • set site directory to the repository root

Add the docs folder to .gitignore, so that we do not replicate the information in both branches.
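The entry in .gitignore is just the folder name on its own line:

docs/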

Now we can do this:

quarto publish gh-pages

Just in case

I need some photos

Observations from iNaturalist

To get the data I first install the rinat package:

here::i_am("how2/how-to-site.qmd")
if (!require(rinat)) {
  install.packages("rinat")
  library(rinat)
}

Then I can download the observations from iNaturalist and save them in a data folder:

# Download observations and save to RDS file
user_obs <- get_inat_obs_user("NeoMapas",maxresults = 5000)
if (!dir.exists(here::here("data")))
    dir.create(here::here("data"))
file_name <- here::here("data","iNaturalist-obs-NeoMapas.rds")
saveRDS(file=file_name, user_obs)

Photos from Flickr

So, I think I need some photos on my website, and I have a Flickr account, and I use R, so there should be a library that…

Oh yes! Found it!

https://koki25ando.github.io/FlickrAPI/

install.packages("FlickrAPI")

Now I need a Flickr API key

library(FlickrAPI)
setFlickrAPIKey(api_key = "YOUR_API_KEY_HERE", install = TRUE)

And finally, it works!
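A quick way to check that the key is picked up is a minimal call to getPhotos() for my own account (assuming the key was installed to ~/.Renviron in the previous step):

library(FlickrAPI)
readRenviron("~/.Renviron") # read the stored API key
test <- getPhotos(user_id = "jferrer")
head(test)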

Let’s now query our photos (up to 1000) and save the information in an RDS file:

here::i_am("how2/how-to-site.qmd")
library(FlickrAPI)
library(foreach)
library(dplyr)
readRenviron("~/.Renviron") # read the API key
photos <- foreach(u=c("jferrer","jferrer","199798864@N08"), i=c(1,2,1),.combine = "bind_rows") %do% {
  getPhotos(user_id = u, img_size="m", extras = c("description","owner_name",
  "url_m"), per_page=1000, page=i)
}

dim(photos)

file_name <- here::here("data","flickr-photos.rds")
saveRDS(file=file_name, photos)

Photos from Google

Getting the photos from the Google Photos Library was more complex, but I wrote a blog entry describing all the steps I took:

  1. I created a project in Google Cloud,
  2. enabled the Photos Library API (not sure if this is relevant here),
  3. configured a simple consent page,
  4. created an OAuth 2.0 client ID and downloaded the JSON file,
  5. added GC_PROJECT_EMAIL and GC_PROJECT_CRED_JSON to my .Renviron file (see the sketch after this list).
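The two entries in the .Renviron file look something like this (both values are placeholders; the JSON path points to the downloaded client ID file):

GC_PROJECT_EMAIL=my.name@example.com
GC_PROJECT_CRED_JSON=/path/to/client_secret.json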

Then I ran these lines of code:

Required libraries

library(gargle)
library(dplyr)
library(jsonlite)
library(httr)
library(foreach)
library(stringr)

Credentials for authentication

readRenviron(".Renviron")
cred_json <- Sys.getenv("GC_PROJECT_CRED_JSON")
if (!file.exists(cred_json)) {
  stop("credentials not found, please update Renviron file")
} else {
  clnt <- gargle_oauth_client_from_json(path=cred_json)
}

tkn <- gargle2.0_token(
  email = Sys.getenv("GC_PROJECT_EMAIL"),
  client = clnt,
  scope = c("https://www.googleapis.com/auth/photoslibrary.readonly",
            "https://www.googleapis.com/auth/photoslibrary.sharing")
)
k <- token_fetch(token=tkn)
authorization = paste('Bearer', k$credentials$access_token)

List of albums

getalbum <-
  GET("https://photoslibrary.googleapis.com/v1/albums",
      add_headers(
        'Authorization' = authorization,
        'Accept'  = 'application/json'),
      query = list("pageSize" = 50)) %>% 
  content(., as = "text", encoding = "UTF-8") %>%
  fromJSON(.) 
if (!is.null(getalbum$nextPageToken)) {
  getalbum2 <-
    GET("https://photoslibrary.googleapis.com/v1/albums",
      add_headers(
        'Authorization' = authorization,
        'Accept'  = 'application/json'),
      query = list("pageToken" = getalbum$nextPageToken)) %>% 
    content(., as = "text", encoding = "UTF-8") %>%
    fromJSON(.) 
}

album_info <- getalbum$albums %>% select(id, title)

Get photo information from target albums

lugares <- c("Lugares - México", "Lugares - Europa", "Lugares - Sur América", "Eventos - Venezuela", "Visita a Colombia - Oct 2024", "Santiago - Nuevo León")
eventos <- c("Eventos - CEBA LEE", "Eventos - RLE", "Eventos - Venezuela", "Eventos - Mariposas", "Eventos - IVIC")
aIDs <- album_info %>% filter(title %in% c(lugares, eventos)) %>% pull(id)

photos <- foreach(aID=aIDs, .combine = "bind_rows") %do% {
  dts <-  POST("https://photoslibrary.googleapis.com/v1/mediaItems:search",
      add_headers(
        'Authorization' = authorization,
        'Accept'  = 'application/json'),
      body = list("albumId" = aID,
                  "pageSize" = 50),
      encode = "json"
      ) %>% 
    content(., as = "text", encoding = "UTF-8") %>%
    fromJSON(., flatten = TRUE) %>% 
    data.frame()
  dts$album <- album_info %>% filter(id %in% aID) %>% pull(title)
  dts <- dts %>% 
    mutate(
      output_file = str_replace_all(mediaItems.description, "[ ,/]+", "-"),
      output_id = abbreviate(mediaItems.id))
  dts 
}

Download images and create a DB file

here::i_am("how2/how-to-site.qmd")
img_folder <- here::here("lgrs","img")
if (!dir.exists(img_folder))
  dir.create(img_folder)

for (i in seq(along=photos$mediaItems.id)[photos$album %in% lugares]) {
  photo <- photos %>% slice(i)
  durl <- sprintf("%s=w400-h400-d", photo$mediaItems.baseUrl)
  dfile <- sprintf("%s/%s-%s.jpg",img_folder, photo$output_id, photo$output_file)
  if (!file.exists(dfile))
    download.file(url=durl, destfile=dfile)
}

img_folder <- here::here("evnts","img")
if (!dir.exists(img_folder))
  dir.create(img_folder)
for (i in seq(along=photos$mediaItems.id)[photos$album %in% eventos]) {
  photo <- photos %>% slice(i)
  durl <- sprintf("%s=w400-h400-d", photo$mediaItems.baseUrl)
  dfile <- sprintf("%s/%s-%s.jpg",img_folder, photo$output_id, photo$output_file)
  if (!file.exists(dfile))
    download.file(url=durl, destfile=dfile)
}

file_name <- here::here("data","google-photos.rds")
saveRDS(file=file_name, photos)

This downloads small copies of all the images that I will need, and saves the links and other information into an RDS file.
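Later chunks can read this file back instead of querying the API again, for example:

here::i_am("how2/how-to-site.qmd")
photos <- readRDS(here::here("data","google-photos.rds"))
table(photos$album) # number of photos per album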

Galleries with pixture

I am experimenting with the pixture package to create image galleries.

install.packages(c("htmlwidgets","shiny","remotes"))
remotes::install_github('royfrancis/pixture')
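A minimal sketch of a gallery, assuming pixgallery() accepts a vector of image paths and using the images downloaded earlier into lgrs/img:

library(pixture)
here::i_am("how2/how-to-site.qmd")
imgs <- list.files(here::here("lgrs","img"), pattern = "jpg$", full.names = TRUE)
pixgallery(imgs)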

Logos

It is best to keep a copy of the logos in a local folder rather than relying on external links that can break at any moment:

source inc/download-logos.sh

Bibliography

Download one CSL file as a base style:

cd bibteX
wget 'https://www.zotero.org/styles/journal-and-proceedings-of-the-royal-society-of-new-south-wales?source=1' --output-document=my.csl

Modify the sort order of the bibliography (not the citations…):

<bibliography hanging-indent="true" entry-spacing="0">
    <sort>
      <key variable="issued" sort="descending"/>
      <key macro="author"/>
    </sort>
  ...
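To actually use the modified style, the YAML header of the document (or the _quarto.yml file) should point to it, something like this (the bibliography path is a placeholder):

csl: bibteX/my.csl
bibliography: references.bib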