Merge pull request #567 from getzola/next

v.0.6.0
This commit is contained in:
Vincent Prouillet 2019-03-25 20:26:07 +01:00 committed by GitHub
commit 5d695d7ce8
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
138 changed files with 5667 additions and 2247 deletions

.gitignore (vendored, 3 changes)

@@ -1,18 +1,21 @@
 target
 .idea/
 test_site/public
+test_site_i18n/public
 docs/public
 small-blog
 medium-blog
 big-blog
 huge-blog
+extra-huge-blog
 small-kb
 medium-kb
 huge-kb
 current.bench
 now.bench
+*.zst
 # snapcraft artifacts
 snap/.snapcraft


@@ -16,7 +16,7 @@ matrix:
     # The earliest stable Rust version that works
     - env: TARGET=x86_64-unknown-linux-gnu
-      rust: 1.30.0
+      rust: 1.31.0
       before_install: set -e


@@ -1,5 +1,32 @@
 # Changelog

+## 0.6.0 (unreleased)
+
+### Breaking
+- `earlier/later` and `lighter/heavier` are not set anymore on pages when rendering
+a section
+- The table of content for a page/section is now only available as the `toc` variable when
+rendering it and not anymore on the `page`/`section` variable
+- Default directory for `load_data` is now the root of the site instead of the `content` directory
+- Change variable sent to the sitemap template, see documentation for details
+
+### Other
+- Add support for content in multiple languages
+- Lower latency on serve before rebuilding from 2 to 1 second
+- Allow processing PNG and produced images are less blurry
+- Add an id (`zola-continue-reading`) to the paragraph generated after a summary
+- Add Dracula syntax highlighting theme
+- Fix using inline styles in headers
+- Fix sections with render=false being shown in sitemap
+- Sitemap is now split when there are more than 30 000 links in it
+- Add link to sitemap in robots.txt
+- Markdown rendering is now fully CommonMark compliant
+- `load_data` now defaults to loading file as plain text, unless `format` is passed
+or the extension matches csv/toml/json
+- Sitemap entries get an additional `extra` field for pages only
+- Add a `base-path` command line option to `build` and `serve`
 ## 0.5.1 (2018-12-14)
 - Fix deleting markdown file in `zola serve`
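The new `load_data` default mentioned in the changelog can be sketched as a small extension dispatch. This is a standalone illustration, not zola's actual implementation; the `guess_format` helper and the `DataFormat` enum are invented names, and only the csv/toml/json special-casing comes from the changelog entry:

```rust
use std::path::Path;

// Hypothetical sketch: anything that is not csv/toml/json is treated as plain
// text, unless an explicit `format` argument is given.
#[derive(Debug, PartialEq)]
enum DataFormat {
    Csv,
    Toml,
    Json,
    Plain,
}

fn guess_format(path: &str, explicit: Option<DataFormat>) -> DataFormat {
    if let Some(f) = explicit {
        // An explicit `format` always wins over the extension.
        return f;
    }
    match Path::new(path).extension().and_then(|e| e.to_str()) {
        Some("csv") => DataFormat::Csv,
        Some("toml") => DataFormat::Toml,
        Some("json") => DataFormat::Json,
        _ => DataFormat::Plain,
    }
}

fn main() {
    assert_eq!(guess_format("data/authors.json", None), DataFormat::Json);
    // New in 0.6.0: unknown extensions default to plain text instead of erroring.
    assert_eq!(guess_format("data/notes.txt", None), DataFormat::Plain);
    assert_eq!(guess_format("data/notes.txt", Some(DataFormat::Toml)), DataFormat::Toml);
    println!("ok");
}
```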

Cargo.lock (generated, 1769 changes)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
 [package]
 name = "zola"
-version = "0.5.1"
+version = "0.6.0"
 authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]
 license = "MIT"
 readme = "README.md"


@@ -19,5 +19,7 @@
 | [Daniel Sockwell's codesections.com](https://www.codesections.com) | https://gitlab.com/codesections/codesections-website |
 | [Jens Getreu's blog](https://blog.getreu.net) | |
 | [Matthias Endler](https://matthias-endler.de) | https://github.com/mre/mre.github.io |
+| [Michael Plotke](https://michael.plotke.me) | https://gitlab.com/bdjnk/michael |
+| [shaleenjain.com](https://shaleenjain.com) | https://github.com/shalzz/shalzz.github.io |
 | [Hello, Rust!](https://hello-rust.show) | https://github.com/hello-rust/hello-rust.github.io |
 | [maxdeviant.com](https://maxdeviant.com/) | |


@@ -16,7 +16,7 @@ in the `docs/content` folder of the repository and the community can use [its fo
 | Syntax highlighting | ✔ | ✔ | ✔ | ✔ |
 | Sass compilation | ✔ | ✔ | ✔ | ✔ |
 | Assets co-location | ✔ | ✔ | ✔ | ✔ |
-| i18n | ✕ | ✕ | ✔ | ✔ |
+| Multilingual site | ✔ | ✕ | ✔ | ✔ |
 | Image processing | ✔ | ✕ | ✔ | ✔ |
 | Sane & powerful template engine | ✔ | ~ | ~ | ✔ |
 | Themes | ✔ | ✕ | ✔ | ✔ |


@@ -10,7 +10,7 @@ environment:
 matrix:
   - target: x86_64-pc-windows-msvc
-    RUST_VERSION: 1.29.0
+    RUST_VERSION: 1.31.0
   - target: x86_64-pc-windows-msvc
     RUST_VERSION: stable


@@ -13,3 +13,4 @@ lazy_static = "1"
 syntect = "3"
 errors = { path = "../errors" }
+utils = { path = "../utils" }


@@ -1,6 +1,4 @@
 use std::collections::HashMap;
-use std::fs::File;
-use std::io::prelude::*;
 use std::path::{Path, PathBuf};

 use chrono::Utc;
@@ -9,13 +7,29 @@ use syntect::parsing::{SyntaxSet, SyntaxSetBuilder};
 use toml;
 use toml::Value as Toml;

-use errors::{Result, ResultExt};
+use errors::Result;
 use highlighting::THEME_SET;
 use theme::Theme;
+use utils::fs::read_file_with_error;

 // We want a default base url for tests
 static DEFAULT_BASE_URL: &'static str = "http://a-website.com";

+#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
+#[serde(default)]
+pub struct Language {
+    /// The language code
+    pub code: String,
+    /// Whether to generate a RSS feed for that language, defaults to `false`
+    pub rss: bool,
+}
+
+impl Default for Language {
+    fn default() -> Language {
+        Language { code: String::new(), rss: false }
+    }
+}
+
 #[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
 #[serde(default)]
 pub struct Taxonomy {
@@ -27,6 +41,9 @@ pub struct Taxonomy {
     pub paginate_path: Option<String>,
     /// Whether to generate a RSS feed only for each taxonomy term, defaults to false
     pub rss: bool,
+    /// The language for that taxonomy, only used in multilingual sites.
+    /// Defaults to the config `default_language` if not set
+    pub lang: String,
 }

 impl Taxonomy {
@@ -49,7 +66,13 @@ impl Taxonomy {
 impl Default for Taxonomy {
     fn default() -> Taxonomy {
-        Taxonomy { name: String::new(), paginate_by: None, paginate_path: None, rss: false }
+        Taxonomy {
+            name: String::new(),
+            paginate_by: None,
+            paginate_path: None,
+            rss: false,
+            lang: String::new(),
+        }
     }
 }
@@ -68,6 +91,8 @@ pub struct Config {
     /// The language used in the site. Defaults to "en"
     pub default_language: String,
+    /// The list of supported languages outside of the default one
+    pub languages: Vec<Language>,
     /// Languages list and translated strings
     pub translations: HashMap<String, Toml>,
@@ -148,20 +173,23 @@ impl Config {
                 Some(glob_set_builder.build().expect("Bad ignored_content in config file."));
         }

+        for taxonomy in config.taxonomies.iter_mut() {
+            if taxonomy.lang.is_empty() {
+                taxonomy.lang = config.default_language.clone();
+            }
+        }
+
         Ok(config)
     }

     /// Parses a config file from the given path
     pub fn from_file<P: AsRef<Path>>(path: P) -> Result<Config> {
-        let mut content = String::new();
         let path = path.as_ref();
         let file_name = path.file_name().unwrap();
-        File::open(path)
-            .chain_err(|| {
-                format!("No `{:?}` file found. Are you in the right directory?", file_name)
-            })?
-            .read_to_string(&mut content)?;
+        let content = read_file_with_error(
+            path,
+            &format!("No `{:?}` file found. Are you in the right directory?", file_name),
+        )?;

         Config::parse(&content)
     }
@@ -227,6 +255,16 @@ impl Config {
         let theme = Theme::from_file(path)?;
         self.add_theme_extra(&theme)
     }
+
+    /// Is this site using i18n?
+    pub fn is_multilingual(&self) -> bool {
+        !self.languages.is_empty()
+    }
+
+    /// Returns the codes of all additional languages
+    pub fn languages_codes(&self) -> Vec<&str> {
+        self.languages.iter().map(|l| l.code.as_ref()).collect()
+    }
 }

 impl Default for Config {
@@ -239,6 +277,7 @@ impl Default for Config {
             highlight_code: false,
             highlight_theme: "base16-ocean-dark".to_string(),
             default_language: "en".to_string(),
+            languages: Vec::new(),
             generate_rss: false,
             rss_limit: None,
             taxonomies: Vec::new(),
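The multilingual helpers and the taxonomy-language defaulting added to `Config` can be exercised in isolation. This is a miniature stand-in (the struct fields are trimmed down, and `fill_taxonomy_langs` is an invented name for the loop that `Config::parse` runs inline), not the real zola `Config`:

```rust
// Pared-down stand-ins for the real config structs.
struct Language { code: String }
struct Taxonomy { lang: String }

struct Config {
    default_language: String,
    languages: Vec<Language>,
    taxonomies: Vec<Taxonomy>,
}

impl Config {
    // A site is multilingual as soon as any extra language is declared.
    fn is_multilingual(&self) -> bool {
        !self.languages.is_empty()
    }

    // Codes of the additional (non-default) languages.
    fn languages_codes(&self) -> Vec<&str> {
        self.languages.iter().map(|l| l.code.as_ref()).collect()
    }

    // Mirrors the loop added to `Config::parse`: a taxonomy with an empty
    // `lang` falls back to the site's default language.
    fn fill_taxonomy_langs(&mut self) {
        let default = self.default_language.clone();
        for t in self.taxonomies.iter_mut() {
            if t.lang.is_empty() {
                t.lang = default.clone();
            }
        }
    }
}

fn main() {
    let mut config = Config {
        default_language: "en".to_string(),
        languages: vec![Language { code: "fr".to_string() }],
        taxonomies: vec![Taxonomy { lang: String::new() }],
    };
    config.fill_taxonomy_langs();
    assert!(config.is_multilingual());
    assert_eq!(config.languages_codes(), vec!["fr"]);
    assert_eq!(config.taxonomies[0].lang, "en");
    println!("ok");
}
```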


@@ -1,18 +1,20 @@
 #[macro_use]
 extern crate serde_derive;
-extern crate toml;
-#[macro_use]
-extern crate errors;
 extern crate chrono;
 extern crate globset;
+extern crate toml;
 #[macro_use]
 extern crate lazy_static;
 extern crate syntect;
+#[macro_use]
+extern crate errors;
+extern crate utils;

 mod config;
 pub mod highlighting;
 mod theme;

-pub use config::{Config, Taxonomy};
+pub use config::{Config, Language, Taxonomy};

 use std::path::Path;


@@ -1,11 +1,10 @@
 use std::collections::HashMap;
-use std::fs::File;
-use std::io::prelude::*;
 use std::path::PathBuf;

 use toml::Value as Toml;

-use errors::{Result, ResultExt};
+use errors::Result;
+use utils::fs::read_file_with_error;

 /// Holds the data from a `theme.toml` file.
 /// There are other fields than `extra` in it but Zola
@@ -40,15 +39,12 @@ impl Theme {
     /// Parses a theme file from the given path
     pub fn from_file(path: &PathBuf) -> Result<Theme> {
-        let mut content = String::new();
-        File::open(path)
-            .chain_err(|| {
-                "No `theme.toml` file found. \
-                 Is the `theme` defined in your `config.toml present in the `themes` directory \
-                 and does it have a `theme.toml` inside?"
-            })?
-            .read_to_string(&mut content)?;
+        let content = read_file_with_error(
+            path,
+            "No `theme.toml` file found. \
+             Is the `theme` defined in your `config.toml present in the `themes` directory \
+             and does it have a `theme.toml` inside?",
+        )?;

         Theme::parse(&content)
     }
 }


@@ -4,8 +4,7 @@ version = "0.1.0"
 authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]

 [dependencies]
-error-chain = "0.12"
-tera = "0.11"
+tera = "1.0.0-alpha.3"
 toml = "0.4"
-image = "0.20"
+image = "0.21"
 syntect = "3"


@@ -1,27 +1,109 @@
-#![allow(unused_doc_comments)]
-
-#[macro_use]
-extern crate error_chain;
 extern crate image;
 extern crate syntect;
 extern crate tera;
 extern crate toml;

-error_chain! {
-    errors {}
-
-    links {
-        Tera(tera::Error, tera::ErrorKind);
-    }
-
-    foreign_links {
-        Io(::std::io::Error);
-        Toml(toml::de::Error);
-        Image(image::ImageError);
-        Syntect(syntect::LoadingError);
-    }
-}
+use std::convert::Into;
+use std::error::Error as StdError;
+use std::fmt;
+
+#[derive(Debug)]
+pub enum ErrorKind {
+    Msg(String),
+    Tera(tera::Error),
+    Io(::std::io::Error),
+    Toml(toml::de::Error),
+    Image(image::ImageError),
+    Syntect(syntect::LoadingError),
+}
+
+/// The Error type
+#[derive(Debug)]
+pub struct Error {
+    /// Kind of error
+    pub kind: ErrorKind,
+    pub source: Option<Box<dyn StdError>>,
+}
+
+unsafe impl Sync for Error {}
+unsafe impl Send for Error {}
+
+impl StdError for Error {
+    fn source(&self) -> Option<&(dyn StdError + 'static)> {
+        let mut source = self.source.as_ref().map(|c| &**c);
+        if source.is_none() {
+            match self.kind {
+                ErrorKind::Tera(ref err) => source = err.source(),
+                _ => (),
+            };
+        }
+        source
+    }
+}
+
+impl fmt::Display for Error {
+    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+        match self.kind {
+            ErrorKind::Msg(ref message) => write!(f, "{}", message),
+            ErrorKind::Tera(ref e) => write!(f, "{}", e),
+            ErrorKind::Io(ref e) => write!(f, "{}", e),
+            ErrorKind::Toml(ref e) => write!(f, "{}", e),
+            ErrorKind::Image(ref e) => write!(f, "{}", e),
+            ErrorKind::Syntect(ref e) => write!(f, "{}", e),
+        }
+    }
+}
+
+impl Error {
+    /// Creates generic error
+    pub fn msg(value: impl ToString) -> Self {
+        Self { kind: ErrorKind::Msg(value.to_string()), source: None }
+    }
+
+    /// Creates generic error with a cause
+    pub fn chain(value: impl ToString, source: impl Into<Box<dyn StdError>>) -> Self {
+        Self { kind: ErrorKind::Msg(value.to_string()), source: Some(source.into()) }
+    }
+}
+
+impl From<&str> for Error {
+    fn from(e: &str) -> Self {
+        Self::msg(e)
+    }
+}
+impl From<String> for Error {
+    fn from(e: String) -> Self {
+        Self::msg(e)
+    }
+}
+impl From<toml::de::Error> for Error {
+    fn from(e: toml::de::Error) -> Self {
+        Self { kind: ErrorKind::Toml(e), source: None }
+    }
+}
+impl From<syntect::LoadingError> for Error {
+    fn from(e: syntect::LoadingError) -> Self {
+        Self { kind: ErrorKind::Syntect(e), source: None }
+    }
+}
+impl From<tera::Error> for Error {
+    fn from(e: tera::Error) -> Self {
+        Self { kind: ErrorKind::Tera(e), source: None }
+    }
+}
+impl From<::std::io::Error> for Error {
+    fn from(e: ::std::io::Error) -> Self {
+        Self { kind: ErrorKind::Io(e), source: None }
+    }
+}
+impl From<image::ImageError> for Error {
+    fn from(e: image::ImageError) -> Self {
+        Self { kind: ErrorKind::Image(e), source: None }
+    }
+}
+
+/// Convenient wrapper around std::Result.
+pub type Result<T> = ::std::result::Result<T, Error>;

 // So we can use bail! in all other crates
 #[macro_export]
 macro_rules! bail {
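The shape of the new hand-rolled error type (a kind plus an optional boxed source, replacing `error_chain!`) can be exercised standalone. This is a pared-down stand-in with only two kinds, showing how `Error::chain` takes over from error_chain's `chain_err`:

```rust
use std::error::Error as StdError;
use std::fmt;

// Miniature version of the crate's new Error: a kind plus an optional cause.
#[derive(Debug)]
enum ErrorKind {
    Msg(String),
    Io(std::io::Error),
}

#[derive(Debug)]
struct Error {
    kind: ErrorKind,
    source: Option<Box<dyn StdError>>,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self.kind {
            ErrorKind::Msg(ref m) => write!(f, "{}", m),
            ErrorKind::Io(ref e) => write!(f, "{}", e),
        }
    }
}

impl Error {
    // The replacement for `chain_err`: wrap a cause with a readable message.
    fn chain(msg: impl ToString, source: impl Into<Box<dyn StdError>>) -> Self {
        Error { kind: ErrorKind::Msg(msg.to_string()), source: Some(source.into()) }
    }
}

fn main() {
    let io_err = std::io::Error::new(std::io::ErrorKind::NotFound, "missing");
    let err = Error::chain("Failed to process image: logo.png", io_err);
    // Display shows the high-level message; the cause stays attached.
    assert_eq!(err.to_string(), "Failed to process image: logo.png");
    assert!(err.source.is_some());
    println!("ok");
}
```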


@@ -4,7 +4,7 @@ version = "0.1.0"
 authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]

 [dependencies]
-tera = "0.11"
+tera = "1.0.0-alpha.3"
 chrono = "0.4"
 serde = "1"
 serde_derive = "1"


@@ -12,7 +12,7 @@ extern crate toml;
 extern crate errors;
 extern crate utils;

-use errors::{Result, ResultExt};
+use errors::{Error, Result};
 use regex::Regex;

 use std::path::Path;
@@ -71,8 +71,11 @@ pub fn split_section_content(
     content: &str,
 ) -> Result<(SectionFrontMatter, String)> {
     let (front_matter, content) = split_content(file_path, content)?;
-    let meta = SectionFrontMatter::parse(&front_matter).chain_err(|| {
-        format!("Error when parsing front matter of section `{}`", file_path.to_string_lossy())
-    })?;
+    let meta = SectionFrontMatter::parse(&front_matter).map_err(|e| {
+        Error::chain(
+            format!("Error when parsing front matter of section `{}`", file_path.to_string_lossy()),
+            e,
+        )
+    })?;
     Ok((meta, content))
 }
@@ -81,8 +84,11 @@ pub fn split_section_content(
 /// Returns a parsed `PageFrontMatter` and the rest of the content
 pub fn split_page_content(file_path: &Path, content: &str) -> Result<(PageFrontMatter, String)> {
     let (front_matter, content) = split_content(file_path, content)?;
-    let meta = PageFrontMatter::parse(&front_matter).chain_err(|| {
-        format!("Error when parsing front matter of page `{}`", file_path.to_string_lossy())
-    })?;
+    let meta = PageFrontMatter::parse(&front_matter).map_err(|e| {
+        Error::chain(
+            format!("Error when parsing front matter of page `{}`", file_path.to_string_lossy()),
+            e,
+        )
+    })?;
     Ok((meta, content))
 }


@@ -6,8 +6,8 @@ authors = ["Vojtěch Král <vojtech@kral.hk>"]

 [dependencies]
 lazy_static = "1"
 regex = "1.0"
-tera = "0.11"
-image = "0.20"
+tera = "1.0.0-alpha.3"
+image = "0.21"
 rayon = "1"

 errors = { path = "../errors" }


@@ -15,18 +15,19 @@ use std::hash::{Hash, Hasher};
 use std::path::{Path, PathBuf};

 use image::jpeg::JPEGEncoder;
+use image::png::PNGEncoder;
 use image::{FilterType, GenericImageView};
 use rayon::prelude::*;
 use regex::Regex;

-use errors::{Result, ResultExt};
+use errors::{Error, Result};
 use utils::fs as ufs;

 static RESIZED_SUBDIR: &'static str = "processed_images";

 lazy_static! {
     pub static ref RESIZED_FILENAME: Regex =
-        Regex::new(r#"([0-9a-f]{16})([0-9a-f]{2})[.]jpg"#).unwrap();
+        Regex::new(r#"([0-9a-f]{16})([0-9a-f]{2})[.](jpg|png)"#).unwrap();
 }

 /// Describes the precise kind of a resize operation
@@ -136,12 +137,78 @@ impl Hash for ResizeOp {
     }
 }

+/// Thumbnail image format
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum Format {
+    /// JPEG, The `u8` argument is JPEG quality (in percent).
+    Jpeg(u8),
+    /// PNG
+    Png,
+}
+
+impl Format {
+    pub fn from_args(source: &str, format: &str, quality: u8) -> Result<Format> {
+        use Format::*;
+        assert!(quality > 0 && quality <= 100, "Jpeg quality must be within the range [1; 100]");
+        match format {
+            "auto" => match Self::is_lossy(source) {
+                Some(true) => Ok(Jpeg(quality)),
+                Some(false) => Ok(Png),
+                None => Err(format!("Unsupported image file: {}", source).into()),
+            },
+            "jpeg" | "jpg" => Ok(Jpeg(quality)),
+            "png" => Ok(Png),
+            _ => Err(format!("Invalid image format: {}", format).into()),
+        }
+    }
+
+    /// Looks at file's extension and, if it's a supported image format, returns whether the format is lossless
+    pub fn is_lossy<P: AsRef<Path>>(p: P) -> Option<bool> {
+        p.as_ref()
+            .extension()
+            .and_then(|s| s.to_str())
+            .map(|ext| match ext.to_lowercase().as_str() {
+                "jpg" | "jpeg" => Some(true),
+                "png" => Some(false),
+                "gif" => Some(false),
+                "bmp" => Some(false),
+                _ => None,
+            })
+            .unwrap_or(None)
+    }
+
+    fn extension(&self) -> &str {
+        // Kept in sync with RESIZED_FILENAME and op_filename
+        use Format::*;
+        match *self {
+            Png => "png",
+            Jpeg(_) => "jpg",
+        }
+    }
+}
+
+impl Hash for Format {
+    fn hash<H: Hasher>(&self, hasher: &mut H) {
+        use Format::*;
+        let q = match *self {
+            Png => 0,
+            Jpeg(q) => q,
+        };
+        hasher.write_u8(q);
+    }
+}
+
 /// Holds all data needed to perform a resize operation
 #[derive(Debug, PartialEq, Eq)]
 pub struct ImageOp {
     source: String,
     op: ResizeOp,
-    quality: u8,
+    format: Format,
     /// Hash of the above parameters
     hash: u64,
     /// If there is a hash collision with another ImageOp, this contains a sequential ID > 1
@@ -152,14 +219,14 @@
 }

 impl ImageOp {
-    pub fn new(source: String, op: ResizeOp, quality: u8) -> ImageOp {
+    pub fn new(source: String, op: ResizeOp, format: Format) -> ImageOp {
         let mut hasher = DefaultHasher::new();
         hasher.write(source.as_ref());
         op.hash(&mut hasher);
-        hasher.write_u8(quality);
+        format.hash(&mut hasher);
         let hash = hasher.finish();
-        ImageOp { source, op, quality, hash, collision_id: 0 }
+        ImageOp { source, op, format, hash, collision_id: 0 }
     }

     pub fn from_args(
@@ -167,10 +234,12 @@ impl ImageOp {
         op: &str,
         width: Option<u32>,
         height: Option<u32>,
+        format: &str,
         quality: u8,
     ) -> Result<ImageOp> {
         let op = ResizeOp::from_args(op, width, height)?;
-        Ok(Self::new(source, op, quality))
+        let format = Format::from_args(&source, format, quality)?;
+        Ok(Self::new(source, op, format))
     }

     fn perform(&self, content_path: &Path, target_path: &Path) -> Result<()> {
@@ -184,7 +253,7 @@
         let mut img = image::open(&src_path)?;
         let (img_w, img_h) = img.dimensions();

-        const RESIZE_FILTER: FilterType = FilterType::Gaussian;
+        const RESIZE_FILTER: FilterType = FilterType::Lanczos3;
         const RATIO_EPSILLION: f32 = 0.1;

         let img = match self.op {
@@ -223,9 +292,19 @@
         };

         let mut f = File::create(target_path)?;
-        let mut enc = JPEGEncoder::new_with_quality(&mut f, self.quality);
         let (img_w, img_h) = img.dimensions();
-        enc.encode(&img.raw_pixels(), img_w, img_h, img.color())?;
+        match self.format {
+            Format::Png => {
+                let mut enc = PNGEncoder::new(&mut f);
+                enc.encode(&img.raw_pixels(), img_w, img_h, img.color())?;
+            }
+            Format::Jpeg(q) => {
+                let mut enc = JPEGEncoder::new_with_quality(&mut f, q);
+                enc.encode(&img.raw_pixels(), img_w, img_h, img.color())?;
+            }
+        }

         Ok(())
     }
 }
@@ -323,20 +402,21 @@ impl Processor {
         collision_id
     }

-    fn op_filename(hash: u64, collision_id: u32) -> String {
+    fn op_filename(hash: u64, collision_id: u32, format: Format) -> String {
         // Please keep this in sync with RESIZED_FILENAME
         assert!(collision_id < 256, "Unexpectedly large number of collisions: {}", collision_id);
-        format!("{:016x}{:02x}.jpg", hash, collision_id)
+        format!("{:016x}{:02x}.{}", hash, collision_id, format.extension())
     }

-    fn op_url(&self, hash: u64, collision_id: u32) -> String {
-        format!("{}/{}", &self.resized_url, Self::op_filename(hash, collision_id))
+    fn op_url(&self, hash: u64, collision_id: u32, format: Format) -> String {
+        format!("{}/{}", &self.resized_url, Self::op_filename(hash, collision_id, format))
     }

     pub fn insert(&mut self, img_op: ImageOp) -> String {
         let hash = img_op.hash;
+        let format = img_op.format;
         let collision_id = self.insert_with_collisions(img_op);
-        self.op_url(hash, collision_id)
+        self.op_url(hash, collision_id, format)
     }

     pub fn prune(&self) -> Result<()> {
@@ -373,25 +453,11 @@ impl Processor {
         self.img_ops
             .par_iter()
             .map(|(hash, op)| {
-                let target = self.resized_path.join(Self::op_filename(*hash, op.collision_id));
+                let target =
+                    self.resized_path.join(Self::op_filename(*hash, op.collision_id, op.format));
                 op.perform(&self.content_path, &target)
-                    .chain_err(|| format!("Failed to process image: {}", op.source))
+                    .map_err(|e| Error::chain(format!("Failed to process image: {}", op.source), e))
             })
             .collect::<Result<()>>()
     }
 }
-
-/// Looks at file's extension and returns whether it's a supported image format
-pub fn file_is_img<P: AsRef<Path>>(p: P) -> bool {
-    p.as_ref()
-        .extension()
-        .and_then(|s| s.to_str())
-        .map(|ext| match ext.to_lowercase().as_str() {
-            "jpg" | "jpeg" => true,
-            "png" => true,
-            "gif" => true,
-            "bmp" => true,
-            _ => false,
-        })
-        .unwrap_or(false)
-}
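The format-selection and cache-naming logic in this file is self-contained enough to test in isolation. The sketch below mirrors `Format::from_args`, `Format::is_lossy`, and `op_filename` as plain functions; the `Result<Format, String>` signature is simplified from the crate's own `Result` type:

```rust
use std::path::Path;

// Stand-in for the crate's thumbnail Format.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Format {
    Jpeg(u8), // quality in percent
    Png,
}

// Extension-based lossiness check, as in `Format::is_lossy`.
fn is_lossy(p: &str) -> Option<bool> {
    Path::new(p).extension().and_then(|s| s.to_str()).and_then(|ext| {
        match ext.to_lowercase().as_str() {
            "jpg" | "jpeg" => Some(true),
            "png" | "gif" | "bmp" => Some(false),
            _ => None,
        }
    })
}

// `auto` keeps lossy sources as JPEG and lossless ones as PNG.
fn from_args(source: &str, format: &str, quality: u8) -> Result<Format, String> {
    match format {
        "auto" => match is_lossy(source) {
            Some(true) => Ok(Format::Jpeg(quality)),
            Some(false) => Ok(Format::Png),
            None => Err(format!("Unsupported image file: {}", source)),
        },
        "jpeg" | "jpg" => Ok(Format::Jpeg(quality)),
        "png" => Ok(Format::Png),
        _ => Err(format!("Invalid image format: {}", format)),
    }
}

// Cache filename: 16 hex digits of hash, 2 of collision id, then the extension.
fn op_filename(hash: u64, collision_id: u32, format: Format) -> String {
    let ext = match format {
        Format::Png => "png",
        Format::Jpeg(_) => "jpg",
    };
    format!("{:016x}{:02x}.{}", hash, collision_id, ext)
}

fn main() {
    assert_eq!(from_args("photo.jpg", "auto", 75), Ok(Format::Jpeg(75)));
    assert_eq!(from_args("diagram.png", "auto", 75), Ok(Format::Png));
    assert!(from_args("doc.pdf", "auto", 75).is_err());
    // The cached filename now carries the chosen extension, matching RESIZED_FILENAME.
    assert_eq!(op_filename(0xdeadbeef, 0, Format::Png), "00000000deadbeef00.png");
    println!("ok");
}
```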


@@ -7,7 +7,7 @@ authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]
 slotmap = "0.2"
 rayon = "1"
 chrono = { version = "0.4", features = ["serde"] }
-tera = "0.11"
+tera = "1.0.0-alpha.3"
 serde = "1"
 serde_derive = "1"
 slug = "0.1"


@@ -1,5 +1,8 @@
 use std::path::{Path, PathBuf};

+use config::Config;
+use errors::Result;
+
 /// Takes a full path to a file and returns only the components after the first `content` directory
 /// Will not return the filename as last component
 pub fn find_content_components<P: AsRef<Path>>(path: P) -> Vec<String> {
@@ -28,7 +31,10 @@ pub fn find_content_components<P: AsRef<Path>>(path: P) -> Vec<String> {
 pub struct FileInfo {
     /// The full path to the .md file
     pub path: PathBuf,
+    /// The on-disk filename, will differ from the `name` when there is a language code in it
+    pub filename: String,
     /// The name of the .md file without the extension, always `_index` for sections
+    /// Doesn't contain the language if there was one in the filename
     pub name: String,
     /// The .md path, starting from the content directory, with `/` slashes
     pub relative: String,
@@ -40,14 +46,19 @@ pub struct FileInfo {
     /// For example a file at content/kb/solutions/blabla.md will have 2 components:
     /// `kb` and `solutions`
     pub components: Vec<String>,
+    /// This is `parent` + `name`, used to find content referring to the same content but in
+    /// various languages.
+    pub canonical: PathBuf,
 }

 impl FileInfo {
-    pub fn new_page(path: &Path) -> FileInfo {
+    pub fn new_page(path: &Path, base_path: &PathBuf) -> FileInfo {
         let file_path = path.to_path_buf();
-        let mut parent = file_path.parent().unwrap().to_path_buf();
+        let mut parent = file_path.parent().expect("Get parent of page").to_path_buf();
         let name = path.file_stem().unwrap().to_string_lossy().to_string();
-        let mut components = find_content_components(&file_path);
+        let mut components = find_content_components(
+            &file_path.strip_prefix(base_path).expect("Strip base path prefix for page"),
+        );
         let relative = if !components.is_empty() {
             format!("{}/{}.md", components.join("/"), name)
         } else {
@@ -55,16 +66,20 @@
         };

         // If we have a folder with an asset, don't consider it as a component
-        if !components.is_empty() && name == "index" {
+        // Splitting on `.` as we might have a language so it isn't *only* index but also index.fr
+        // etc
+        if !components.is_empty() && name.split('.').collect::<Vec<_>>()[0] == "index" {
             components.pop();
             // also set parent_path to grandparent instead
             parent = parent.parent().unwrap().to_path_buf();
         }

         FileInfo {
+            filename: file_path.file_name().unwrap().to_string_lossy().to_string(),
             path: file_path,
             // We don't care about grand parent for pages
             grand_parent: None,
+            canonical: parent.join(&name),
             parent,
             name,
             components,
@@ -72,26 +87,61 @@ impl FileInfo {
         }
     }

-    pub fn new_section(path: &Path) -> FileInfo {
-        let parent = path.parent().unwrap().to_path_buf();
-        let components = find_content_components(path);
-        let relative = if components.is_empty() {
-            // the index one
-            "_index.md".to_string()
-        } else {
-            format!("{}/_index.md", components.join("/"))
-        };
+    pub fn new_section(path: &Path, base_path: &PathBuf) -> FileInfo {
+        let file_path = path.to_path_buf();
+        let parent = path.parent().expect("Get parent of section").to_path_buf();
+        let name = path.file_stem().unwrap().to_string_lossy().to_string();
+        let components = find_content_components(
+            &file_path.strip_prefix(base_path).expect("Strip base path prefix for section"),
+        );
+        let relative = if !components.is_empty() {
+            format!("{}/{}.md", components.join("/"), name)
+        } else {
+            format!("{}.md", name)
+        };
         let grand_parent = parent.parent().map(|p| p.to_path_buf());

         FileInfo {
-            path: path.to_path_buf(),
+            filename: file_path.file_name().unwrap().to_string_lossy().to_string(),
+            path: file_path,
+            canonical: parent.join(&name),
             parent,
             grand_parent,
-            name: "_index".to_string(),
+            name,
             components,
             relative,
         }
     }
+
+    /// Look for a language in the filename.
+    /// If a language has been found, update the name of the file in this struct to
+    /// remove it and return the language code
+    pub fn find_language(&mut self, config: &Config) -> Result<String> {
+        // No languages? Nothing to do
+        if !config.is_multilingual() {
+            return Ok(config.default_language.clone());
+        }
+
+        if !self.name.contains('.') {
+            return Ok(config.default_language.clone());
+        }
+
+        // Go with the assumption that no one is using `.` in filenames when using i18n
+        // We can document that
+        let mut parts: Vec<String> = self.name.splitn(2, '.').map(|s| s.to_string()).collect();
+
+        // The language code is not present in the config: typo or the user forgot to add it to the
+        // config
+        if !config.languages_codes().contains(&parts[1].as_ref()) {
+            bail!("File {:?} has a language code of {} which isn't present in the config.toml `languages`", self.path, parts[1]);
+        }
+
+        self.name = parts.swap_remove(0);
+        self.canonical = self.parent.join(&self.name);
+        let lang = parts.swap_remove(0);
+
+        Ok(lang)
+    }
 }
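The filename convention that `find_language` handles (`python.fr.md` has the stem `python.fr`, which splits into the name `python` and the language code `fr`) can be sketched with a small helper. `split_language` is a hypothetical standalone version of the `splitn(2, '.')` step, without the config validation:

```rust
// Split a file stem into (name, optional language code), mirroring the
// `splitn(2, '.')` logic of `find_language`.
fn split_language(stem: &str) -> (String, Option<String>) {
    match stem.find('.') {
        // Everything before the first `.` is the name, the rest is the language.
        Some(idx) => (stem[..idx].to_string(), Some(stem[idx + 1..].to_string())),
        None => (stem.to_string(), None),
    }
}

fn main() {
    assert_eq!(
        split_language("python.fr"),
        ("python".to_string(), Some("fr".to_string()))
    );
    // No dot means the page is in the site's default language.
    assert_eq!(split_language("python"), ("python".to_string(), None));
    println!("ok");
}
```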
 #[doc(hidden)]
@@ -101,16 +151,22 @@ impl Default for FileInfo {
             path: PathBuf::new(),
             parent: PathBuf::new(),
             grand_parent: None,
+            filename: String::new(),
             name: String::new(),
             components: vec![],
             relative: String::new(),
+            canonical: PathBuf::new(),
         }
     }
 }

 #[cfg(test)]
 mod tests {
-    use super::find_content_components;
+    use std::path::{Path, PathBuf};
+
+    use config::{Config, Language};
+
+    use super::{find_content_components, FileInfo};

     #[test]
     fn can_find_content_components() {
@@ -118,4 +174,86 @@ mod tests {
         let res =
             find_content_components("/home/vincent/code/site/content/posts/tutorials/python.md");
         assert_eq!(res, ["posts".to_string(), "tutorials".to_string()]);
     }
#[test]
fn can_find_components_in_page_with_assets() {
let file = FileInfo::new_page(
&Path::new("/home/vincent/code/site/content/posts/tutorials/python/index.md"),
&PathBuf::new(),
);
assert_eq!(file.components, ["posts".to_string(), "tutorials".to_string()]);
}
#[test]
fn doesnt_fail_with_multiple_content_directories() {
let file = FileInfo::new_page(
&Path::new("/home/vincent/code/content/site/content/posts/tutorials/python/index.md"),
&PathBuf::from("/home/vincent/code/content/site"),
);
assert_eq!(file.components, ["posts".to_string(), "tutorials".to_string()]);
}
#[test]
fn can_find_valid_language_in_page() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let mut file = FileInfo::new_page(
&Path::new("/home/vincent/code/site/content/posts/tutorials/python.fr.md"),
&PathBuf::new(),
);
let res = file.find_language(&config);
assert!(res.is_ok());
assert_eq!(res.unwrap(), "fr");
}
#[test]
fn can_find_valid_language_in_page_with_assets() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let mut file = FileInfo::new_page(
&Path::new("/home/vincent/code/site/content/posts/tutorials/python/index.fr.md"),
&PathBuf::new(),
);
assert_eq!(file.components, ["posts".to_string(), "tutorials".to_string()]);
let res = file.find_language(&config);
assert!(res.is_ok());
assert_eq!(res.unwrap(), "fr");
}
#[test]
fn do_nothing_on_unknown_language_in_page_with_i18n_off() {
let config = Config::default();
let mut file = FileInfo::new_page(
&Path::new("/home/vincent/code/site/content/posts/tutorials/python.fr.md"),
&PathBuf::new(),
);
let res = file.find_language(&config);
assert!(res.is_ok());
assert_eq!(res.unwrap(), config.default_language);
}
#[test]
fn errors_on_unknown_language_in_page_with_i18n_on() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("it"), rss: false });
let mut file = FileInfo::new_page(
&Path::new("/home/vincent/code/site/content/posts/tutorials/python.fr.md"),
&PathBuf::new(),
);
let res = file.find_language(&config);
assert!(res.is_err());
}
#[test]
fn can_find_valid_language_in_section() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let mut file = FileInfo::new_section(
&Path::new("/home/vincent/code/site/content/posts/tutorials/_index.fr.md"),
&PathBuf::new(),
);
let res = file.find_language(&config);
assert!(res.is_ok());
assert_eq!(res.unwrap(), "fr");
}
} }

@@ -8,7 +8,7 @@ use slug::slugify;
use tera::{Context as TeraContext, Tera}; use tera::{Context as TeraContext, Tera};
use config::Config; use config::Config;
use errors::{Result, ResultExt}; use errors::{Error, Result};
use front_matter::{split_page_content, InsertAnchor, PageFrontMatter}; use front_matter::{split_page_content, InsertAnchor, PageFrontMatter};
use library::Library; use library::Library;
use rendering::{render_content, Header, RenderContext}; use rendering::{render_content, Header, RenderContext};
@@ -71,14 +71,19 @@ pub struct Page {
/// How long would it take to read the raw content. /// How long would it take to read the raw content.
/// See `get_reading_analytics` on how it is calculated /// See `get_reading_analytics` on how it is calculated
pub reading_time: Option<usize>, pub reading_time: Option<usize>,
/// The language of that page. Equal to the default language if the user hasn't set up `languages` in the config.
/// Corresponds to the lang in the {slug}.{lang}.md filename scheme
pub lang: String,
/// Contains all the translated versions of that page
pub translations: Vec<Key>,
} }
impl Page { impl Page {
pub fn new<P: AsRef<Path>>(file_path: P, meta: PageFrontMatter) -> Page { pub fn new<P: AsRef<Path>>(file_path: P, meta: PageFrontMatter, base_path: &PathBuf) -> Page {
let file_path = file_path.as_ref(); let file_path = file_path.as_ref();
Page { Page {
file: FileInfo::new_page(file_path), file: FileInfo::new_page(file_path, base_path),
meta, meta,
ancestors: vec![], ancestors: vec![],
raw_content: "".to_string(), raw_content: "".to_string(),
@@ -97,6 +102,8 @@ impl Page {
toc: vec![], toc: vec![],
word_count: None, word_count: None,
reading_time: None, reading_time: None,
lang: String::new(),
translations: Vec::new(),
} }
} }
@@ -107,9 +114,16 @@ impl Page {
/// Parse a page given the content of the .md file /// Parse a page given the content of the .md file
/// Files without front matter or with invalid front matter are considered /// Files without front matter or with invalid front matter are considered
/// erroneous /// erroneous
pub fn parse(file_path: &Path, content: &str, config: &Config) -> Result<Page> { pub fn parse(
file_path: &Path,
content: &str,
config: &Config,
base_path: &PathBuf,
) -> Result<Page> {
let (meta, content) = split_page_content(file_path, content)?; let (meta, content) = split_page_content(file_path, content)?;
let mut page = Page::new(file_path, meta); let mut page = Page::new(file_path, meta, base_path);
page.lang = page.file.find_language(config)?;
page.raw_content = content; page.raw_content = content;
let (word_count, reading_time) = get_reading_analytics(&page.raw_content); let (word_count, reading_time) = get_reading_analytics(&page.raw_content);
@@ -117,7 +131,16 @@ impl Page {
page.reading_time = Some(reading_time); page.reading_time = Some(reading_time);
let mut slug_from_dated_filename = None; let mut slug_from_dated_filename = None;
if let Some(ref caps) = RFC3339_DATE.captures(&page.file.name.replace(".md", "")) { let file_path = if page.file.name == "index" {
if let Some(parent) = page.file.path.parent() {
parent.file_name().unwrap().to_str().unwrap().to_string()
} else {
page.file.name.replace(".md", "")
}
} else {
page.file.name.replace(".md", "")
};
if let Some(ref caps) = RFC3339_DATE.captures(&file_path) {
slug_from_dated_filename = Some(caps.name("slug").unwrap().as_str().to_string()); slug_from_dated_filename = Some(caps.name("slug").unwrap().as_str().to_string());
if page.meta.date.is_none() { if page.meta.date.is_none() {
page.meta.date = Some(caps.name("datetime").unwrap().as_str().to_string()); page.meta.date = Some(caps.name("datetime").unwrap().as_str().to_string());
@@ -130,7 +153,11 @@ impl Page {
slug.trim().to_string() slug.trim().to_string()
} else if page.file.name == "index" { } else if page.file.name == "index" {
if let Some(parent) = page.file.path.parent() { if let Some(parent) = page.file.path.parent() {
if let Some(slug) = slug_from_dated_filename {
slugify(&slug)
} else {
slugify(parent.file_name().unwrap().to_str().unwrap()) slugify(parent.file_name().unwrap().to_str().unwrap())
}
} else { } else {
slugify(&page.file.name) slugify(&page.file.name)
} }
@@ -144,13 +171,19 @@ impl Page {
}; };
if let Some(ref p) = page.meta.path { if let Some(ref p) = page.meta.path {
page.path = p.trim().trim_left_matches('/').to_string(); page.path = p.trim().trim_start_matches('/').to_string();
} else { } else {
page.path = if page.file.components.is_empty() { let mut path = if page.file.components.is_empty() {
page.slug.clone() page.slug.clone()
} else { } else {
format!("{}/{}", page.file.components.join("/"), page.slug) format!("{}/{}", page.file.components.join("/"), page.slug)
}; };
if page.lang != config.default_language {
path = format!("{}/{}", page.lang, path);
}
page.path = path;
} }
if !page.path.ends_with('/') { if !page.path.ends_with('/') {
page.path = format!("{}/", page.path); page.path = format!("{}/", page.path);
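The path assembly in `Page::parse` can be sketched as a small helper (hypothetical function, not the actual Page API): join the content components with the slug, prefix the language for non-default languages, and ensure a trailing slash.

```rust
// Illustrative sketch of how a page's path is built when no `path` is
// given in the front matter.
fn page_path(components: &[&str], slug: &str, lang: &str, default_lang: &str) -> String {
    let mut path = if components.is_empty() {
        slug.to_string()
    } else {
        format!("{}/{}", components.join("/"), slug)
    };
    // Non-default languages get a language prefix, e.g. `fr/hello`.
    if lang != default_lang {
        path = format!("{}/{}", lang, path);
    }
    // Paths always end with a trailing slash.
    if !path.ends_with('/') {
        path = format!("{}/", path);
    }
    path
}

fn main() {
    // A `hello.fr.md` page at the content root ends up under fr/hello/,
    // matching the i18n tests in this file.
    assert_eq!(page_path(&[], "hello", "fr", "en"), "fr/hello/");
    assert_eq!(page_path(&["posts", "intro"], "hello-world", "en", "en"), "posts/intro/hello-world/");
}
```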
@@ -168,10 +201,14 @@ impl Page {
} }
/// Read and parse a .md file into a Page struct /// Read and parse a .md file into a Page struct
pub fn from_file<P: AsRef<Path>>(path: P, config: &Config) -> Result<Page> { pub fn from_file<P: AsRef<Path>>(
path: P,
config: &Config,
base_path: &PathBuf,
) -> Result<Page> {
let path = path.as_ref(); let path = path.as_ref();
let content = read_file(path)?; let content = read_file(path)?;
let mut page = Page::parse(path, &content, config)?; let mut page = Page::parse(path, &content, config, base_path)?;
if page.file.name == "index" { if page.file.name == "index" {
let parent_dir = path.parent().unwrap(); let parent_dir = path.parent().unwrap();
@@ -218,8 +255,9 @@ impl Page {
context.tera_context.insert("page", &SerializingPage::from_page_basic(self, None)); context.tera_context.insert("page", &SerializingPage::from_page_basic(self, None));
let res = render_content(&self.raw_content, &context) let res = render_content(&self.raw_content, &context).map_err(|e| {
.chain_err(|| format!("Failed to render content of {}", self.file.path.display()))?; Error::chain(format!("Failed to render content of {}", self.file.path.display()), e)
})?;
self.summary = res.summary_len.map(|l| res.body[0..l].to_owned()); self.summary = res.summary_len.map(|l| res.body[0..l].to_owned());
self.content = res.body; self.content = res.body;
@@ -240,9 +278,12 @@ impl Page {
context.insert("current_url", &self.permalink); context.insert("current_url", &self.permalink);
context.insert("current_path", &self.path); context.insert("current_path", &self.path);
context.insert("page", &self.to_serialized(library)); context.insert("page", &self.to_serialized(library));
context.insert("lang", &self.lang);
context.insert("toc", &self.toc);
render_template(&tpl_name, tera, &context, &config.theme) render_template(&tpl_name, tera, context, &config.theme).map_err(|e| {
.chain_err(|| format!("Failed to render page '{}'", self.file.path.display())) Error::chain(format!("Failed to render page '{}'", self.file.path.display()), e)
})
} }
/// Creates a vectors of asset URLs. /// Creates a vectors of asset URLs.
@@ -286,6 +327,8 @@ impl Default for Page {
toc: vec![], toc: vec![],
word_count: None, word_count: None,
reading_time: None, reading_time: None,
lang: String::new(),
translations: Vec::new(),
} }
} }
} }
@@ -295,14 +338,14 @@ mod tests {
use std::collections::HashMap; use std::collections::HashMap;
use std::fs::{create_dir, File}; use std::fs::{create_dir, File};
use std::io::Write; use std::io::Write;
use std::path::Path; use std::path::{Path, PathBuf};
use globset::{Glob, GlobSetBuilder}; use globset::{Glob, GlobSetBuilder};
use tempfile::tempdir; use tempfile::tempdir;
use tera::Tera; use tera::Tera;
use super::Page; use super::Page;
use config::Config; use config::{Config, Language};
use front_matter::InsertAnchor; use front_matter::InsertAnchor;
#[test] #[test]
@@ -314,7 +357,7 @@ description = "hey there"
slug = "hello-world" slug = "hello-world"
+++ +++
Hello world"#; Hello world"#;
let res = Page::parse(Path::new("post.md"), content, &Config::default()); let res = Page::parse(Path::new("post.md"), content, &Config::default(), &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let mut page = res.unwrap(); let mut page = res.unwrap();
page.render_markdown( page.render_markdown(
@@ -340,7 +383,8 @@ Hello world"#;
Hello world"#; Hello world"#;
let mut conf = Config::default(); let mut conf = Config::default();
conf.base_url = "http://hello.com/".to_string(); conf.base_url = "http://hello.com/".to_string();
let res = Page::parse(Path::new("content/posts/intro/start.md"), content, &conf); let res =
Page::parse(Path::new("content/posts/intro/start.md"), content, &conf, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.path, "posts/intro/hello-world/"); assert_eq!(page.path, "posts/intro/hello-world/");
@@ -356,7 +400,7 @@ Hello world"#;
+++ +++
Hello world"#; Hello world"#;
let config = Config::default(); let config = Config::default();
let res = Page::parse(Path::new("start.md"), content, &config); let res = Page::parse(Path::new("start.md"), content, &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.path, "hello-world/"); assert_eq!(page.path, "hello-world/");
@@ -372,7 +416,12 @@ Hello world"#;
+++ +++
Hello world"#; Hello world"#;
let config = Config::default(); let config = Config::default();
let res = Page::parse(Path::new("content/posts/intro/start.md"), content, &config); let res = Page::parse(
Path::new("content/posts/intro/start.md"),
content,
&config,
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.path, "hello-world/"); assert_eq!(page.path, "hello-world/");
@@ -388,7 +437,12 @@ Hello world"#;
+++ +++
Hello world"#; Hello world"#;
let config = Config::default(); let config = Config::default();
let res = Page::parse(Path::new("content/posts/intro/start.md"), content, &config); let res = Page::parse(
Path::new("content/posts/intro/start.md"),
content,
&config,
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.path, "hello-world/"); assert_eq!(page.path, "hello-world/");
@@ -404,14 +458,15 @@ Hello world"#;
slug = "hello-world" slug = "hello-world"
+++ +++
Hello world"#; Hello world"#;
let res = Page::parse(Path::new("start.md"), content, &Config::default()); let res = Page::parse(Path::new("start.md"), content, &Config::default(), &PathBuf::new());
assert!(res.is_err()); assert!(res.is_err());
} }
#[test] #[test]
fn can_make_slug_from_non_slug_filename() { fn can_make_slug_from_non_slug_filename() {
let config = Config::default(); let config = Config::default();
let res = Page::parse(Path::new(" file with space.md"), "+++\n+++", &config); let res =
Page::parse(Path::new(" file with space.md"), "+++\n+++", &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.slug, "file-with-space"); assert_eq!(page.slug, "file-with-space");
@@ -427,7 +482,7 @@ Hello world"#;
Hello world Hello world
<!-- more -->"# <!-- more -->"#
.to_string(); .to_string();
let res = Page::parse(Path::new("hello.md"), &content, &config); let res = Page::parse(Path::new("hello.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let mut page = res.unwrap(); let mut page = res.unwrap();
page.render_markdown(&HashMap::default(), &Tera::default(), &config, InsertAnchor::None) page.render_markdown(&HashMap::default(), &Tera::default(), &config, InsertAnchor::None)
@@ -449,7 +504,11 @@ Hello world
File::create(nested_path.join("graph.jpg")).unwrap(); File::create(nested_path.join("graph.jpg")).unwrap();
File::create(nested_path.join("fail.png")).unwrap(); File::create(nested_path.join("fail.png")).unwrap();
let res = Page::from_file(nested_path.join("index.md").as_path(), &Config::default()); let res = Page::from_file(
nested_path.join("index.md").as_path(),
&Config::default(),
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.file.parent, path.join("content").join("posts")); assert_eq!(page.file.parent, path.join("content").join("posts"));
@@ -472,7 +531,11 @@ Hello world
File::create(nested_path.join("graph.jpg")).unwrap(); File::create(nested_path.join("graph.jpg")).unwrap();
File::create(nested_path.join("fail.png")).unwrap(); File::create(nested_path.join("fail.png")).unwrap();
let res = Page::from_file(nested_path.join("index.md").as_path(), &Config::default()); let res = Page::from_file(
nested_path.join("index.md").as_path(),
&Config::default(),
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.file.parent, path.join("content").join("posts")); assert_eq!(page.file.parent, path.join("content").join("posts"));
@@ -481,6 +544,35 @@ Hello world
assert_eq!(page.permalink, "http://a-website.com/posts/hey/"); assert_eq!(page.permalink, "http://a-website.com/posts/hey/");
} }
// https://github.com/getzola/zola/issues/607
#[test]
fn page_with_assets_and_date_in_folder_name() {
let tmp_dir = tempdir().expect("create temp dir");
let path = tmp_dir.path();
create_dir(&path.join("content")).expect("create content temp dir");
create_dir(&path.join("content").join("posts")).expect("create posts temp dir");
let nested_path = path.join("content").join("posts").join("2013-06-02_with-assets");
create_dir(&nested_path).expect("create nested temp dir");
let mut f = File::create(nested_path.join("index.md")).unwrap();
f.write_all(b"+++\n\n+++\n").unwrap();
File::create(nested_path.join("example.js")).unwrap();
File::create(nested_path.join("graph.jpg")).unwrap();
File::create(nested_path.join("fail.png")).unwrap();
let res = Page::from_file(
nested_path.join("index.md").as_path(),
&Config::default(),
&PathBuf::new(),
);
assert!(res.is_ok());
let page = res.unwrap();
assert_eq!(page.file.parent, path.join("content").join("posts"));
assert_eq!(page.slug, "with-assets");
assert_eq!(page.meta.date, Some("2013-06-02".to_string()));
assert_eq!(page.assets.len(), 3);
assert_eq!(page.permalink, "http://a-website.com/posts/with-assets/");
}
#[test] #[test]
fn page_with_ignored_assets_filters_out_correct_files() { fn page_with_ignored_assets_filters_out_correct_files() {
let tmp_dir = tempdir().expect("create temp dir"); let tmp_dir = tempdir().expect("create temp dir");
@@ -500,7 +592,7 @@ Hello world
let mut config = Config::default(); let mut config = Config::default();
config.ignored_content_globset = Some(gsb.build().unwrap()); config.ignored_content_globset = Some(gsb.build().unwrap());
let res = Page::from_file(nested_path.join("index.md").as_path(), &config); let res = Page::from_file(nested_path.join("index.md").as_path(), &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
@@ -517,7 +609,7 @@ Hello world
Hello world Hello world
<!-- more -->"# <!-- more -->"#
.to_string(); .to_string();
let res = Page::parse(Path::new("2018-10-08_hello.md"), &content, &config); let res = Page::parse(Path::new("2018-10-08_hello.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
@@ -534,7 +626,12 @@ Hello world
Hello world Hello world
<!-- more -->"# <!-- more -->"#
.to_string(); .to_string();
let res = Page::parse(Path::new("2018-10-02T15:00:00Z-hello.md"), &content, &config); let res = Page::parse(
Path::new("2018-10-02T15:00:00Z-hello.md"),
&content,
&config,
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
@@ -552,11 +649,65 @@ date = 2018-09-09
Hello world Hello world
<!-- more -->"# <!-- more -->"#
.to_string(); .to_string();
let res = Page::parse(Path::new("2018-10-08_hello.md"), &content, &config); let res = Page::parse(Path::new("2018-10-08_hello.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.meta.date, Some("2018-09-09".to_string())); assert_eq!(page.meta.date, Some("2018-09-09".to_string()));
assert_eq!(page.slug, "hello"); assert_eq!(page.slug, "hello");
} }
#[test]
fn can_specify_language_in_filename() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let content = r#"
+++
+++
Bonjour le monde"#
.to_string();
let res = Page::parse(Path::new("hello.fr.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok());
let page = res.unwrap();
assert_eq!(page.lang, "fr".to_string());
assert_eq!(page.slug, "hello");
assert_eq!(page.permalink, "http://a-website.com/fr/hello/");
}
#[test]
fn can_specify_language_in_filename_with_date() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let content = r#"
+++
+++
Bonjour le monde"#
.to_string();
let res =
Page::parse(Path::new("2018-10-08_hello.fr.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok());
let page = res.unwrap();
assert_eq!(page.meta.date, Some("2018-10-08".to_string()));
assert_eq!(page.lang, "fr".to_string());
assert_eq!(page.slug, "hello");
assert_eq!(page.permalink, "http://a-website.com/fr/hello/");
}
#[test]
fn i18n_frontmatter_path_overrides_default_permalink() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let content = r#"
+++
path = "bonjour"
+++
Bonjour le monde"#
.to_string();
let res = Page::parse(Path::new("hello.fr.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok());
let page = res.unwrap();
assert_eq!(page.lang, "fr".to_string());
assert_eq!(page.slug, "hello");
assert_eq!(page.permalink, "http://a-website.com/bonjour/");
}
} }

@@ -5,7 +5,7 @@ use slotmap::Key;
use tera::{Context as TeraContext, Tera}; use tera::{Context as TeraContext, Tera};
use config::Config; use config::Config;
use errors::{Result, ResultExt}; use errors::{Error, Result};
use front_matter::{split_section_content, SectionFrontMatter}; use front_matter::{split_section_content, SectionFrontMatter};
use rendering::{render_content, Header, RenderContext}; use rendering::{render_content, Header, RenderContext};
use utils::fs::{find_related_assets, read_file}; use utils::fs::{find_related_assets, read_file};
@@ -51,14 +51,23 @@ pub struct Section {
/// How long would it take to read the raw content. /// How long would it take to read the raw content.
/// See `get_reading_analytics` on how it is calculated /// See `get_reading_analytics` on how it is calculated
pub reading_time: Option<usize>, pub reading_time: Option<usize>,
/// The language of that section. Equal to the default language if the user hasn't set up `languages` in the config.
/// Corresponds to the lang in the _index.{lang}.md filename scheme
pub lang: String,
/// Contains all the translated versions of that section
pub translations: Vec<Key>,
} }
impl Section { impl Section {
pub fn new<P: AsRef<Path>>(file_path: P, meta: SectionFrontMatter) -> Section { pub fn new<P: AsRef<Path>>(
file_path: P,
meta: SectionFrontMatter,
base_path: &PathBuf,
) -> Section {
let file_path = file_path.as_ref(); let file_path = file_path.as_ref();
Section { Section {
file: FileInfo::new_section(file_path), file: FileInfo::new_section(file_path, base_path),
meta, meta,
ancestors: vec![], ancestors: vec![],
path: "".to_string(), path: "".to_string(),
@@ -74,17 +83,30 @@ impl Section {
toc: vec![], toc: vec![],
word_count: None, word_count: None,
reading_time: None, reading_time: None,
lang: String::new(),
translations: Vec::new(),
} }
} }
pub fn parse(file_path: &Path, content: &str, config: &Config) -> Result<Section> { pub fn parse(
file_path: &Path,
content: &str,
config: &Config,
base_path: &PathBuf,
) -> Result<Section> {
let (meta, content) = split_section_content(file_path, content)?; let (meta, content) = split_section_content(file_path, content)?;
let mut section = Section::new(file_path, meta); let mut section = Section::new(file_path, meta, base_path);
section.lang = section.file.find_language(config)?;
section.raw_content = content; section.raw_content = content;
let (word_count, reading_time) = get_reading_analytics(&section.raw_content); let (word_count, reading_time) = get_reading_analytics(&section.raw_content);
section.word_count = Some(word_count); section.word_count = Some(word_count);
section.reading_time = Some(reading_time); section.reading_time = Some(reading_time);
section.path = format!("{}/", section.file.components.join("/")); let path = section.file.components.join("/");
if section.lang != config.default_language {
section.path = format!("{}/{}", section.lang, path);
} else {
section.path = format!("{}/", path);
}
section.components = section section.components = section
.path .path
.split('/') .split('/')
@@ -96,10 +118,14 @@ impl Section {
} }
/// Read and parse a .md file into a Section struct /// Read and parse a .md file into a Section struct
pub fn from_file<P: AsRef<Path>>(path: P, config: &Config) -> Result<Section> { pub fn from_file<P: AsRef<Path>>(
path: P,
config: &Config,
base_path: &PathBuf,
) -> Result<Section> {
let path = path.as_ref(); let path = path.as_ref();
let content = read_file(path)?; let content = read_file(path)?;
let mut section = Section::parse(path, &content, config)?; let mut section = Section::parse(path, &content, config, base_path)?;
let parent_dir = path.parent().unwrap(); let parent_dir = path.parent().unwrap();
let assets = find_related_assets(parent_dir); let assets = find_related_assets(parent_dir);
@@ -158,8 +184,9 @@ impl Section {
context.tera_context.insert("section", &SerializingSection::from_section_basic(self, None)); context.tera_context.insert("section", &SerializingSection::from_section_basic(self, None));
let res = render_content(&self.raw_content, &context) let res = render_content(&self.raw_content, &context).map_err(|e| {
.chain_err(|| format!("Failed to render content of {}", self.file.path.display()))?; Error::chain(format!("Failed to render content of {}", self.file.path.display()), e)
})?;
self.content = res.body; self.content = res.body;
self.toc = res.toc; self.toc = res.toc;
Ok(()) Ok(())
@@ -174,9 +201,12 @@ impl Section {
context.insert("current_url", &self.permalink); context.insert("current_url", &self.permalink);
context.insert("current_path", &self.path); context.insert("current_path", &self.path);
context.insert("section", &self.to_serialized(library)); context.insert("section", &self.to_serialized(library));
context.insert("lang", &self.lang);
context.insert("toc", &self.toc);
render_template(tpl_name, tera, &context, &config.theme) render_template(tpl_name, tera, context, &config.theme).map_err(|e| {
.chain_err(|| format!("Failed to render section '{}'", self.file.path.display())) Error::chain(format!("Failed to render section '{}'", self.file.path.display()), e)
})
} }
/// Is this the index section? /// Is this the index section?
@@ -223,6 +253,8 @@ impl Default for Section {
toc: vec![], toc: vec![],
reading_time: None, reading_time: None,
word_count: None, word_count: None,
lang: String::new(),
translations: Vec::new(),
} }
} }
} }
@@ -231,12 +263,13 @@ impl Default for Section {
mod tests { mod tests {
use std::fs::{create_dir, File}; use std::fs::{create_dir, File};
use std::io::Write; use std::io::Write;
use std::path::{Path, PathBuf};
use globset::{Glob, GlobSetBuilder}; use globset::{Glob, GlobSetBuilder};
use tempfile::tempdir; use tempfile::tempdir;
use super::Section; use super::Section;
use config::Config; use config::{Config, Language};
#[test] #[test]
fn section_with_assets_gets_right_info() { fn section_with_assets_gets_right_info() {
@@ -252,7 +285,11 @@ mod tests {
File::create(nested_path.join("graph.jpg")).unwrap(); File::create(nested_path.join("graph.jpg")).unwrap();
File::create(nested_path.join("fail.png")).unwrap(); File::create(nested_path.join("fail.png")).unwrap();
let res = Section::from_file(nested_path.join("_index.md").as_path(), &Config::default()); let res = Section::from_file(
nested_path.join("_index.md").as_path(),
&Config::default(),
&PathBuf::new(),
);
assert!(res.is_ok()); assert!(res.is_ok());
let section = res.unwrap(); let section = res.unwrap();
assert_eq!(section.assets.len(), 3); assert_eq!(section.assets.len(), 3);
@@ -278,11 +315,51 @@ mod tests {
let mut config = Config::default(); let mut config = Config::default();
config.ignored_content_globset = Some(gsb.build().unwrap()); config.ignored_content_globset = Some(gsb.build().unwrap());
let res = Section::from_file(nested_path.join("_index.md").as_path(), &config); let res =
Section::from_file(nested_path.join("_index.md").as_path(), &config, &PathBuf::new());
assert!(res.is_ok()); assert!(res.is_ok());
let page = res.unwrap(); let page = res.unwrap();
assert_eq!(page.assets.len(), 1); assert_eq!(page.assets.len(), 1);
assert_eq!(page.assets[0].file_name().unwrap().to_str(), Some("graph.jpg")); assert_eq!(page.assets[0].file_name().unwrap().to_str(), Some("graph.jpg"));
} }
#[test]
fn can_specify_language_in_filename() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let content = r#"
+++
+++
Bonjour le monde"#
.to_string();
let res = Section::parse(
Path::new("content/hello/nested/_index.fr.md"),
&content,
&config,
&PathBuf::new(),
);
assert!(res.is_ok());
let section = res.unwrap();
assert_eq!(section.lang, "fr".to_string());
assert_eq!(section.permalink, "http://a-website.com/fr/hello/nested/");
}
// https://zola.discourse.group/t/rfc-i18n/13/17?u=keats
#[test]
fn can_make_links_to_translated_sections_without_double_trailing_slash() {
let mut config = Config::default();
config.languages.push(Language { code: String::from("fr"), rss: false });
let content = r#"
+++
+++
Bonjour le monde"#
.to_string();
let res =
Section::parse(Path::new("content/_index.fr.md"), &content, &config, &PathBuf::new());
assert!(res.is_ok());
let section = res.unwrap();
assert_eq!(section.lang, "fr".to_string());
assert_eq!(section.permalink, "http://a-website.com/fr/");
}
} }

@@ -5,7 +5,46 @@ use tera::{Map, Value};
use content::{Page, Section}; use content::{Page, Section};
use library::Library; use library::Library;
use rendering::Header;
#[derive(Clone, Debug, PartialEq, Serialize)]
pub struct TranslatedContent<'a> {
lang: &'a str,
permalink: &'a str,
title: &'a Option<String>,
}
impl<'a> TranslatedContent<'a> {
// Some copy-paste between the two constructors, but not worth creating an enum
pub fn find_all_sections(section: &'a Section, library: &'a Library) -> Vec<Self> {
let mut translations = vec![];
for key in &section.translations {
let other = library.get_section_by_key(*key);
translations.push(TranslatedContent {
lang: &other.lang,
permalink: &other.permalink,
title: &other.meta.title,
});
}
translations
}
pub fn find_all_pages(page: &'a Page, library: &'a Library) -> Vec<Self> {
let mut translations = vec![];
for key in &page.translations {
let other = library.get_page_by_key(*key);
translations.push(TranslatedContent {
lang: &other.lang,
permalink: &other.permalink,
title: &other.meta.title,
});
}
translations
}
}
#[derive(Clone, Debug, PartialEq, Serialize)] #[derive(Clone, Debug, PartialEq, Serialize)]
pub struct SerializingPage<'a> { pub struct SerializingPage<'a> {
@@ -27,13 +66,14 @@ pub struct SerializingPage<'a> {
summary: &'a Option<String>, summary: &'a Option<String>,
word_count: Option<usize>, word_count: Option<usize>,
reading_time: Option<usize>, reading_time: Option<usize>,
toc: &'a [Header],
assets: &'a [String], assets: &'a [String],
draft: bool, draft: bool,
lang: &'a str,
lighter: Option<Box<SerializingPage<'a>>>, lighter: Option<Box<SerializingPage<'a>>>,
heavier: Option<Box<SerializingPage<'a>>>, heavier: Option<Box<SerializingPage<'a>>>,
earlier: Option<Box<SerializingPage<'a>>>, earlier: Option<Box<SerializingPage<'a>>>,
later: Option<Box<SerializingPage<'a>>>, later: Option<Box<SerializingPage<'a>>>,
translations: Vec<TranslatedContent<'a>>,
} }
impl<'a> SerializingPage<'a> { impl<'a> SerializingPage<'a> {
@@ -66,6 +106,8 @@ impl<'a> SerializingPage<'a> {
.map(|k| library.get_section_by_key(*k).file.relative.clone()) .map(|k| library.get_section_by_key(*k).file.relative.clone())
.collect(); .collect();
let translations = TranslatedContent::find_all_pages(page, library);
SerializingPage { SerializingPage {
relative_path: &page.file.relative, relative_path: &page.file.relative,
ancestors, ancestors,
@@ -85,13 +127,14 @@ impl<'a> SerializingPage<'a> {
summary: &page.summary, summary: &page.summary,
word_count: page.word_count, word_count: page.word_count,
reading_time: page.reading_time, reading_time: page.reading_time,
toc: &page.toc,
assets: &page.serialized_assets, assets: &page.serialized_assets,
draft: page.is_draft(), draft: page.is_draft(),
lang: &page.lang,
lighter, lighter,
heavier, heavier,
earlier, earlier,
later, later,
translations,
} }
} }
@@ -114,6 +157,12 @@ impl<'a> SerializingPage<'a> {
vec![] vec![]
}; };
let translations = if let Some(ref lib) = library {
TranslatedContent::find_all_pages(page, lib)
} else {
vec![]
};
SerializingPage { SerializingPage {
relative_path: &page.file.relative, relative_path: &page.file.relative,
ancestors, ancestors,
@ -133,13 +182,14 @@ impl<'a> SerializingPage<'a> {
summary: &page.summary, summary: &page.summary,
word_count: page.word_count, word_count: page.word_count,
reading_time: page.reading_time, reading_time: page.reading_time,
toc: &page.toc,
assets: &page.serialized_assets, assets: &page.serialized_assets,
draft: page.is_draft(), draft: page.is_draft(),
lang: &page.lang,
lighter: None, lighter: None,
heavier: None, heavier: None,
earlier: None, earlier: None,
later: None, later: None,
translations,
} }
} }
} }
@ -157,10 +207,11 @@ pub struct SerializingSection<'a> {
components: &'a [String], components: &'a [String],
word_count: Option<usize>, word_count: Option<usize>,
reading_time: Option<usize>, reading_time: Option<usize>,
toc: &'a [Header], lang: &'a str,
assets: &'a [String], assets: &'a [String],
pages: Vec<SerializingPage<'a>>, pages: Vec<SerializingPage<'a>>,
subsections: Vec<&'a str>, subsections: Vec<&'a str>,
translations: Vec<TranslatedContent<'a>>,
} }
impl<'a> SerializingSection<'a> { impl<'a> SerializingSection<'a> {
@ -169,7 +220,7 @@ impl<'a> SerializingSection<'a> {
let mut subsections = Vec::with_capacity(section.subsections.len()); let mut subsections = Vec::with_capacity(section.subsections.len());
for k in &section.pages { for k in &section.pages {
pages.push(library.get_page_by_key(*k).to_serialized(library)); pages.push(library.get_page_by_key(*k).to_serialized_basic(library));
} }
for k in &section.subsections { for k in &section.subsections {
@ -181,6 +232,7 @@ impl<'a> SerializingSection<'a> {
.iter() .iter()
.map(|k| library.get_section_by_key(*k).file.relative.clone()) .map(|k| library.get_section_by_key(*k).file.relative.clone())
.collect(); .collect();
let translations = TranslatedContent::find_all_sections(section, library);
SerializingSection { SerializingSection {
relative_path: &section.file.relative, relative_path: &section.file.relative,
@ -194,10 +246,11 @@ impl<'a> SerializingSection<'a> {
components: &section.components, components: &section.components,
word_count: section.word_count, word_count: section.word_count,
reading_time: section.reading_time, reading_time: section.reading_time,
toc: &section.toc,
assets: &section.serialized_assets, assets: &section.serialized_assets,
lang: &section.lang,
pages, pages,
subsections, subsections,
translations,
} }
} }
@ -213,6 +266,12 @@ impl<'a> SerializingSection<'a> {
vec![] vec![]
}; };
let translations = if let Some(ref lib) = library {
TranslatedContent::find_all_sections(section, lib)
} else {
vec![]
};
SerializingSection { SerializingSection {
relative_path: &section.file.relative, relative_path: &section.file.relative,
ancestors, ancestors,
@ -225,10 +284,11 @@ impl<'a> SerializingSection<'a> {
components: &section.components, components: &section.components,
word_count: section.word_count, word_count: section.word_count,
reading_time: section.reading_time, reading_time: section.reading_time,
toc: &section.toc,
assets: &section.serialized_assets, assets: &section.serialized_assets,
lang: &section.lang,
pages: vec![], pages: vec![],
subsections: vec![], subsections: vec![],
translations,
} }
} }
} }

View file

@@ -5,6 +5,7 @@ use slotmap::{DenseSlotMap, Key};

 use front_matter::SortBy;

+use config::Config;
 use content::{Page, Section};
 use sorting::{find_siblings, sort_pages_by_date, sort_pages_by_weight};
@@ -22,18 +23,21 @@ pub struct Library {
     /// All the sections of the site
     sections: DenseSlotMap<Section>,
     /// A mapping path -> key for pages so we can easily get their key
-    paths_to_pages: HashMap<PathBuf, Key>,
+    pub paths_to_pages: HashMap<PathBuf, Key>,
     /// A mapping path -> key for sections so we can easily get their key
     pub paths_to_sections: HashMap<PathBuf, Key>,
+    /// Whether we need to look for translations
+    is_multilingual: bool,
 }

 impl Library {
-    pub fn new(cap_pages: usize, cap_sections: usize) -> Self {
+    pub fn new(cap_pages: usize, cap_sections: usize, is_multilingual: bool) -> Self {
         Library {
             pages: DenseSlotMap::with_capacity(cap_pages),
             sections: DenseSlotMap::with_capacity(cap_sections),
             paths_to_pages: HashMap::with_capacity(cap_pages),
             paths_to_sections: HashMap::with_capacity(cap_sections),
+            is_multilingual,
         }
     }
@@ -79,15 +83,9 @@ impl Library {

     /// Find out the direct subsections of each subsection if there are some
     /// as well as the pages for each section
-    pub fn populate_sections(&mut self) {
-        let (root_path, index_path) = self
-            .sections
-            .values()
-            .find(|s| s.is_index())
-            .map(|s| (s.file.parent.clone(), s.file.path.clone()))
-            .unwrap();
-        let root_key = self.paths_to_sections[&index_path];
+    pub fn populate_sections(&mut self, config: &Config) {
+        let root_path =
+            self.sections.values().find(|s| s.is_index()).map(|s| s.file.parent.clone()).unwrap();

         // We are going to get both the ancestors and grandparents for each section in one go
         let mut ancestors: HashMap<PathBuf, Vec<_>> = HashMap::new();
         let mut subsections: HashMap<PathBuf, Vec<_>> = HashMap::new();
@@ -99,7 +97,8 @@ impl Library {
             if let Some(ref grand_parent) = section.file.grand_parent {
                 subsections
-                    .entry(grand_parent.join("_index.md"))
+                    // Using the original filename to work for multi-lingual sections
+                    .entry(grand_parent.join(&section.file.filename))
                     .or_insert_with(|| vec![])
                     .push(section.file.path.clone());
             }
@@ -111,6 +110,7 @@ impl Library {
             }

             let mut path = root_path.clone();
+            let root_key = self.paths_to_sections[&root_path.join(&section.file.filename)];
             // Index section is the first ancestor of every single section
             let mut parents = vec![root_key];
             for component in &section.file.components {
@@ -119,7 +119,9 @@ impl Library {
                 if path == section.file.parent {
                     continue;
                 }
-                if let Some(section_key) = self.paths_to_sections.get(&path.join("_index.md")) {
+                if let Some(section_key) =
+                    self.paths_to_sections.get(&path.join(&section.file.filename))
+                {
                     parents.push(*section_key);
                 }
             }
@@ -127,7 +129,12 @@ impl Library {
         }

         for (key, page) in &mut self.pages {
-            let mut parent_section_path = page.file.parent.join("_index.md");
+            let parent_filename = if page.lang != config.default_language {
+                format!("_index.{}.md", page.lang)
+            } else {
+                "_index.md".to_string()
+            };
+            let mut parent_section_path = page.file.parent.join(&parent_filename);
             while let Some(section_key) = self.paths_to_sections.get(&parent_section_path) {
                 let parent_is_transparent;
                 // We need to get a reference to a section later so keep the scope of borrowing small
@@ -158,14 +165,15 @@ impl Library {
                     break;
                 }

-                // We've added `_index.md` so if we are here so we need to go up twice
+                // We've added `_index(.{LANG})?.md` so if we are here so we need to go up twice
                 match parent_section_path.clone().parent().unwrap().parent() {
-                    Some(parent) => parent_section_path = parent.join("_index.md"),
+                    Some(parent) => parent_section_path = parent.join(&parent_filename),
                     None => break,
                 }
             }
         }

+        self.populate_translations();
         self.sort_sections_pages();

         let sections = self.paths_to_sections.clone();
@@ -185,7 +193,8 @@ impl Library {
         }
     }

-    /// Sort all sections pages
+    /// Sort all sections pages according to sorting method given
+    /// Pages that cannot be sorted are set to the section.ignored_pages instead
     pub fn sort_sections_pages(&mut self) {
         let mut updates = HashMap::new();
         for (key, section) in &self.sections {
@@ -265,6 +274,51 @@ impl Library {
         }
     }

+    /// Finds all the translations for each section/page and set the `translations`
+    /// field of each as needed
+    /// A no-op for sites without multiple languages
+    fn populate_translations(&mut self) {
+        if !self.is_multilingual {
+            return;
+        }
+
+        // Sections first
+        let mut sections_translations = HashMap::new();
+        for (key, section) in &self.sections {
+            sections_translations
+                .entry(section.file.canonical.clone()) // TODO: avoid this clone
+                .or_insert_with(Vec::new)
+                .push(key);
+        }
+
+        for (key, section) in self.sections.iter_mut() {
+            let translations = &sections_translations[&section.file.canonical];
+            if translations.len() == 1 {
+                section.translations = vec![];
+                continue;
+            }
+            section.translations = translations.iter().filter(|k| **k != key).cloned().collect();
+        }
+
+        // Same thing for pages
+        let mut pages_translations = HashMap::new();
+        for (key, page) in &self.pages {
+            pages_translations
+                .entry(page.file.canonical.clone()) // TODO: avoid this clone
+                .or_insert_with(Vec::new)
+                .push(key);
+        }
+
+        for (key, page) in self.pages.iter_mut() {
+            let translations = &pages_translations[&page.file.canonical];
+            if translations.len() == 1 {
+                page.translations = vec![];
+                continue;
+            }
+            page.translations = translations.iter().filter(|k| **k != key).cloned().collect();
+        }
+    }
+
     /// Find all the orphan pages: pages that are in a folder without an `_index.md`
     pub fn get_all_orphan_pages(&self) -> Vec<&Page> {
         let pages_in_sections =
@@ -277,15 +331,19 @@ impl Library {
             .collect()
     }

-    pub fn find_parent_section<P: AsRef<Path>>(&self, path: P) -> Option<&Section> {
-        let page_key = self.paths_to_pages[path.as_ref()];
-        for s in self.sections.values() {
-            if s.pages.contains(&page_key) {
-                return Some(s);
-            }
-        }
-        None
-    }
+    /// Find the parent section & all grandparents section that have transparent=true
+    /// Only used in rebuild.
+    pub fn find_parent_sections<P: AsRef<Path>>(&self, path: P) -> Vec<&Section> {
+        let mut parents = vec![];
+        let page = self.get_page(path.as_ref()).unwrap();
+        for ancestor in page.ancestors.iter().rev() {
+            let section = self.get_section_by_key(*ancestor);
+            if parents.is_empty() || section.meta.transparent {
+                parents.push(section);
+            }
+        }
+        parents
+    }

     /// Only used in tests

View file

@@ -4,7 +4,7 @@ use slotmap::Key;
 use tera::{to_value, Context, Tera, Value};

 use config::Config;
-use errors::{Result, ResultExt};
+use errors::{Error, Result};
 use utils::templates::render_template;

 use content::{Section, SerializingPage, SerializingSection};
@@ -221,13 +221,14 @@ impl<'a> Paginator<'a> {
         context.insert("current_path", &pager.path);
         context.insert("paginator", &self.build_paginator_context(pager));

-        render_template(&self.template, tera, &context, &config.theme)
-            .chain_err(|| format!("Failed to render pager {}", pager.index))
+        render_template(&self.template, tera, context, &config.theme)
+            .map_err(|e| Error::chain(format!("Failed to render pager {}", pager.index), e))
     }
 }

 #[cfg(test)]
 mod tests {
+    use std::path::PathBuf;
     use tera::to_value;

     use config::Taxonomy as TaxonomyConfig;
@@ -242,7 +243,7 @@ mod tests {
         let mut f = SectionFrontMatter::default();
         f.paginate_by = Some(2);
         f.paginate_path = "page".to_string();
-        let mut s = Section::new("content/_index.md", f);
+        let mut s = Section::new("content/_index.md", f, &PathBuf::new());
         if !is_index {
             s.path = "posts/".to_string();
             s.permalink = "https://vincent.is/posts/".to_string();
@@ -254,7 +255,7 @@ mod tests {
     }

     fn create_library(is_index: bool) -> (Section, Library) {
-        let mut library = Library::new(3, 0);
+        let mut library = Library::new(3, 0, false);
         library.insert_page(Page::default());
         library.insert_page(Page::default());
         library.insert_page(Page::default());

View file

@@ -113,6 +113,7 @@ pub fn find_siblings(sorted: Vec<(&Key, bool)>) -> Vec<(Key, Option<Key>, Option
 #[cfg(test)]
 mod tests {
     use slotmap::DenseSlotMap;
+    use std::path::PathBuf;

     use super::{find_siblings, sort_pages_by_date, sort_pages_by_weight};
     use content::Page;
@@ -122,13 +123,13 @@ mod tests {
         let mut front_matter = PageFrontMatter::default();
         front_matter.date = Some(date.to_string());
         front_matter.date_to_datetime();
-        Page::new("content/hello.md", front_matter)
+        Page::new("content/hello.md", front_matter, &PathBuf::new())
     }

     fn create_page_with_weight(weight: usize) -> Page {
         let mut front_matter = PageFrontMatter::default();
         front_matter.weight = Some(weight);
-        Page::new("content/hello.md", front_matter)
+        Page::new("content/hello.md", front_matter, &PathBuf::new())
     }

     #[test]

View file

@@ -5,7 +5,7 @@ use slug::slugify;
 use tera::{Context, Tera};

 use config::{Config, Taxonomy as TaxonomyConfig};
-use errors::{Result, ResultExt};
+use errors::{Error, Result};
 use utils::templates::render_template;

 use content::SerializingPage;
@@ -48,7 +48,13 @@ pub struct TaxonomyItem {
 }

 impl TaxonomyItem {
-    pub fn new(name: &str, path: &str, config: &Config, keys: Vec<Key>, library: &Library) -> Self {
+    pub fn new(
+        name: &str,
+        taxonomy: &TaxonomyConfig,
+        config: &Config,
+        keys: Vec<Key>,
+        library: &Library,
+    ) -> Self {
         // Taxonomy are almost always used for blogs so we filter by dates
         // and it's not like we can sort things across sections by anything other
         // than dates
@@ -64,7 +70,11 @@ impl TaxonomyItem {
             .collect();
         let (mut pages, ignored_pages) = sort_pages_by_date(data);
         let slug = slugify(name);
-        let permalink = config.make_permalink(&format!("/{}/{}", path, slug));
+        let permalink = if taxonomy.lang != config.default_language {
+            config.make_permalink(&format!("/{}/{}/{}", taxonomy.lang, taxonomy.name, slug))
+        } else {
+            config.make_permalink(&format!("/{}/{}", taxonomy.name, slug))
+        };

         // We still append pages without dates at the end
         pages.extend(ignored_pages);
@@ -108,7 +118,7 @@ impl Taxonomy {
     ) -> Taxonomy {
         let mut sorted_items = vec![];
         for (name, pages) in items {
-            sorted_items.push(TaxonomyItem::new(&name, &kind.name, config, pages, library));
+            sorted_items.push(TaxonomyItem::new(&name, &kind, config, pages, library));
         }
         sorted_items.sort_by(|a, b| a.name.cmp(&b.name));
@@ -140,8 +150,10 @@ impl Taxonomy {
         );
         context.insert("current_path", &format!("/{}/{}", self.kind.name, item.slug));

-        render_template(&format!("{}/single.html", self.kind.name), tera, &context, &config.theme)
-            .chain_err(|| format!("Failed to render single term {} page.", self.kind.name))
+        render_template(&format!("{}/single.html", self.kind.name), tera, context, &config.theme)
+            .map_err(|e| {
+                Error::chain(format!("Failed to render single term {} page.", self.kind.name), e)
+            })
     }

     pub fn render_all_terms(
@@ -159,8 +171,10 @@ impl Taxonomy {
         context.insert("current_url", &config.make_permalink(&self.kind.name));
         context.insert("current_path", &self.kind.name);

-        render_template(&format!("{}/list.html", self.kind.name), tera, &context, &config.theme)
-            .chain_err(|| format!("Failed to render a list of {} page.", self.kind.name))
+        render_template(&format!("{}/list.html", self.kind.name), tera, context, &config.theme)
+            .map_err(|e| {
+                Error::chain(format!("Failed to render a list of {} page.", self.kind.name), e)
+            })
     }

     pub fn to_serialized<'a>(&'a self, library: &'a Library) -> SerializedTaxonomy<'a> {
@@ -186,6 +200,14 @@ pub fn find_taxonomies(config: &Config, library: &Library) -> Result<Vec<Taxonom
         for (name, val) in &page.meta.taxonomies {
             if taxonomies_def.contains_key(name) {
+                if taxonomies_def[name].lang != page.lang {
+                    bail!(
+                        "Page `{}` has taxonomy `{}` which is not available in that language",
+                        page.file.path.display(),
+                        name
+                    );
+                }
+
                 all_taxonomies.entry(name).or_insert_with(HashMap::new);

                 for v in val {
@@ -220,19 +242,31 @@ mod tests {
     use super::*;
     use std::collections::HashMap;

-    use config::{Config, Taxonomy as TaxonomyConfig};
+    use config::{Config, Language, Taxonomy as TaxonomyConfig};
     use content::Page;
     use library::Library;

     #[test]
     fn can_make_taxonomies() {
         let mut config = Config::default();
-        let mut library = Library::new(2, 0);
+        let mut library = Library::new(2, 0, false);

         config.taxonomies = vec![
-            TaxonomyConfig { name: "categories".to_string(), ..TaxonomyConfig::default() },
-            TaxonomyConfig { name: "tags".to_string(), ..TaxonomyConfig::default() },
-            TaxonomyConfig { name: "authors".to_string(), ..TaxonomyConfig::default() },
+            TaxonomyConfig {
+                name: "categories".to_string(),
+                lang: config.default_language.clone(),
+                ..TaxonomyConfig::default()
+            },
+            TaxonomyConfig {
+                name: "tags".to_string(),
+                lang: config.default_language.clone(),
+                ..TaxonomyConfig::default()
+            },
+            TaxonomyConfig {
+                name: "authors".to_string(),
+                lang: config.default_language.clone(),
+                ..TaxonomyConfig::default()
+            },
         ];

         let mut page1 = Page::default();
@@ -240,6 +274,7 @@ mod tests {
         taxo_page1.insert("tags".to_string(), vec!["rust".to_string(), "db".to_string()]);
         taxo_page1.insert("categories".to_string(), vec!["Programming tutorials".to_string()]);
         page1.meta.taxonomies = taxo_page1;
+        page1.lang = config.default_language.clone();
         library.insert_page(page1);

         let mut page2 = Page::default();
@@ -247,6 +282,7 @@ mod tests {
         taxo_page2.insert("tags".to_string(), vec!["rust".to_string(), "js".to_string()]);
         taxo_page2.insert("categories".to_string(), vec!["Other".to_string()]);
         page2.meta.taxonomies = taxo_page2;
+        page2.lang = config.default_language.clone();
         library.insert_page(page2);

         let mut page3 = Page::default();
@@ -254,6 +290,7 @@ mod tests {
         taxo_page3.insert("tags".to_string(), vec!["js".to_string()]);
         taxo_page3.insert("authors".to_string(), vec!["Vincent Prouillet".to_string()]);
         page3.meta.taxonomies = taxo_page3;
+        page3.lang = config.default_language.clone();
         library.insert_page(page3);

         let taxonomies = find_taxonomies(&config, &library).unwrap();
@@ -307,11 +344,140 @@ mod tests {
     #[test]
     fn errors_on_unknown_taxonomy() {
         let mut config = Config::default();
-        let mut library = Library::new(2, 0);
+        let mut library = Library::new(2, 0, false);

+        config.taxonomies = vec![TaxonomyConfig {
+            name: "authors".to_string(),
+            lang: config.default_language.clone(),
+            ..TaxonomyConfig::default()
+        }];
+        let mut page1 = Page::default();
+        let mut taxo_page1 = HashMap::new();
+        taxo_page1.insert("tags".to_string(), vec!["rust".to_string(), "db".to_string()]);
+        page1.meta.taxonomies = taxo_page1;
+        page1.lang = config.default_language.clone();
+        library.insert_page(page1);
+
+        let taxonomies = find_taxonomies(&config, &library);
+        assert!(taxonomies.is_err());
+        let err = taxonomies.unwrap_err();
+        // no path as this is created by Default
+        assert_eq!(
+            format!("{}", err),
+            "Page `` has taxonomy `tags` which is not defined in config.toml"
+        );
+    }
+
+    #[test]
+    fn can_make_taxonomies_in_multiple_languages() {
+        let mut config = Config::default();
+        config.languages.push(Language { rss: false, code: "fr".to_string() });
+        let mut library = Library::new(2, 0, true);
+
+        config.taxonomies = vec![
+            TaxonomyConfig {
+                name: "categories".to_string(),
+                lang: config.default_language.clone(),
+                ..TaxonomyConfig::default()
+            },
+            TaxonomyConfig {
+                name: "tags".to_string(),
+                lang: config.default_language.clone(),
+                ..TaxonomyConfig::default()
+            },
+            TaxonomyConfig {
+                name: "auteurs".to_string(),
+                lang: "fr".to_string(),
+                ..TaxonomyConfig::default()
+            },
+        ];
+
+        let mut page1 = Page::default();
+        let mut taxo_page1 = HashMap::new();
+        taxo_page1.insert("tags".to_string(), vec!["rust".to_string(), "db".to_string()]);
+        taxo_page1.insert("categories".to_string(), vec!["Programming tutorials".to_string()]);
+        page1.meta.taxonomies = taxo_page1;
+        page1.lang = config.default_language.clone();
+        library.insert_page(page1);
+
+        let mut page2 = Page::default();
+        let mut taxo_page2 = HashMap::new();
+        taxo_page2.insert("tags".to_string(), vec!["rust".to_string()]);
+        taxo_page2.insert("categories".to_string(), vec!["Other".to_string()]);
+        page2.meta.taxonomies = taxo_page2;
+        page2.lang = config.default_language.clone();
+        library.insert_page(page2);
+
+        let mut page3 = Page::default();
+        page3.lang = "fr".to_string();
+        let mut taxo_page3 = HashMap::new();
+        taxo_page3.insert("auteurs".to_string(), vec!["Vincent Prouillet".to_string()]);
+        page3.meta.taxonomies = taxo_page3;
+        library.insert_page(page3);
+
+        let taxonomies = find_taxonomies(&config, &library).unwrap();
+        let (tags, categories, authors) = {
+            let mut t = None;
+            let mut c = None;
+            let mut a = None;
+            for x in taxonomies {
+                match x.kind.name.as_ref() {
+                    "tags" => t = Some(x),
+                    "categories" => c = Some(x),
+                    "auteurs" => a = Some(x),
+                    _ => unreachable!(),
+                }
+            }
+            (t.unwrap(), c.unwrap(), a.unwrap())
+        };
+        assert_eq!(tags.items.len(), 2);
+        assert_eq!(categories.items.len(), 2);
+        assert_eq!(authors.items.len(), 1);
+
+        assert_eq!(tags.items[0].name, "db");
+        assert_eq!(tags.items[0].slug, "db");
+        assert_eq!(tags.items[0].permalink, "http://a-website.com/tags/db/");
+        assert_eq!(tags.items[0].pages.len(), 1);
+
+        assert_eq!(tags.items[1].name, "rust");
+        assert_eq!(tags.items[1].slug, "rust");
+        assert_eq!(tags.items[1].permalink, "http://a-website.com/tags/rust/");
+        assert_eq!(tags.items[1].pages.len(), 2);
+
+        assert_eq!(authors.items[0].name, "Vincent Prouillet");
+        assert_eq!(authors.items[0].slug, "vincent-prouillet");
+        assert_eq!(
+            authors.items[0].permalink,
+            "http://a-website.com/fr/auteurs/vincent-prouillet/"
+        );
+        assert_eq!(authors.items[0].pages.len(), 1);
+
+        assert_eq!(categories.items[0].name, "Other");
+        assert_eq!(categories.items[0].slug, "other");
+        assert_eq!(categories.items[0].permalink, "http://a-website.com/categories/other/");
+        assert_eq!(categories.items[0].pages.len(), 1);
+
+        assert_eq!(categories.items[1].name, "Programming tutorials");
+        assert_eq!(categories.items[1].slug, "programming-tutorials");
+        assert_eq!(
+            categories.items[1].permalink,
+            "http://a-website.com/categories/programming-tutorials/"
+        );
+        assert_eq!(categories.items[1].pages.len(), 1);
+    }
+
+    #[test]
+    fn errors_on_taxonomy_of_different_language() {
+        let mut config = Config::default();
+        config.languages.push(Language { rss: false, code: "fr".to_string() });
+        let mut library = Library::new(2, 0, false);

         config.taxonomies =
-            vec![TaxonomyConfig { name: "authors".to_string(), ..TaxonomyConfig::default() }];
+            vec![TaxonomyConfig { name: "tags".to_string(), ..TaxonomyConfig::default() }];

         let mut page1 = Page::default();
+        page1.lang = "fr".to_string();
         let mut taxo_page1 = HashMap::new();
         taxo_page1.insert("tags".to_string(), vec!["rust".to_string(), "db".to_string()]);
         page1.meta.taxonomies = taxo_page1;
@@ -322,8 +488,8 @@ mod tests {
         let err = taxonomies.unwrap_err();
         // no path as this is created by Default
         assert_eq!(
-            err.description(),
-            "Page `` has taxonomy `tags` which is not defined in config.toml"
+            format!("{}", err),
+            "Page `` has taxonomy `tags` which is not available in that language"
         );
     }
 }

View file

@ -98,25 +98,27 @@ fn find_page_front_matter_changes(
/// Handles a path deletion: could be a page, a section, a folder /// Handles a path deletion: could be a page, a section, a folder
fn delete_element(site: &mut Site, path: &Path, is_section: bool) -> Result<()> { fn delete_element(site: &mut Site, path: &Path, is_section: bool) -> Result<()> {
{
let mut library = site.library.write().unwrap();
// Ignore the event if this path was not known // Ignore the event if this path was not known
if !site.library.contains_section(&path.to_path_buf()) if !library.contains_section(&path.to_path_buf())
&& !site.library.contains_page(&path.to_path_buf()) && !library.contains_page(&path.to_path_buf())
{ {
return Ok(()); return Ok(());
} }
if is_section { if is_section {
if let Some(s) = site.library.remove_section(&path.to_path_buf()) { if let Some(s) = library.remove_section(&path.to_path_buf()) {
site.permalinks.remove(&s.file.relative); site.permalinks.remove(&s.file.relative);
} }
} else if let Some(p) = site.library.remove_page(&path.to_path_buf()) { } else if let Some(p) = library.remove_page(&path.to_path_buf()) {
site.permalinks.remove(&p.file.relative); site.permalinks.remove(&p.file.relative);
if !p.meta.taxonomies.is_empty() {
site.populate_taxonomies()?;
} }
} }
// We might have delete the root _index.md so ensure we have at least the default one
// before populating
site.create_default_index_sections()?;
site.populate_sections(); site.populate_sections();
site.populate_taxonomies()?; site.populate_taxonomies()?;
// Ensure we have our fn updated so it doesn't contain the permalink(s)/section/page deleted // Ensure we have our fn updated so it doesn't contain the permalink(s)/section/page deleted
@ -129,35 +131,41 @@ fn delete_element(site: &mut Site, path: &Path, is_section: bool) -> Result<()>
/// Handles a `_index.md` (a section) being edited in some ways /// Handles a `_index.md` (a section) being edited in some ways
fn handle_section_editing(site: &mut Site, path: &Path) -> Result<()> { fn handle_section_editing(site: &mut Site, path: &Path) -> Result<()> {
let section = Section::from_file(path, &site.config)?; let section = Section::from_file(path, &site.config, &site.base_path)?;
let pathbuf = path.to_path_buf(); let pathbuf = path.to_path_buf();
match site.add_section(section, true)? { match site.add_section(section, true)? {
// Updating a section // Updating a section
Some(prev) => { Some(prev) => {
site.populate_sections(); site.populate_sections();
{
let library = site.library.read().unwrap();
if site.library.get_section(&pathbuf).unwrap().meta == prev.meta { if library.get_section(&pathbuf).unwrap().meta == prev.meta {
// Front matter didn't change, only content did // Front matter didn't change, only content did
// so we render only the section page, not its pages // so we render only the section page, not its pages
return site.render_section(&site.library.get_section(&pathbuf).unwrap(), false); return site.render_section(&library.get_section(&pathbuf).unwrap(), false);
}
} }
// Front matter changed // Front matter changed
for changes in find_section_front_matter_changes( let changes = find_section_front_matter_changes(
&site.library.get_section(&pathbuf).unwrap().meta, &site.library.read().unwrap().get_section(&pathbuf).unwrap().meta,
&prev.meta, &prev.meta,
) { );
for change in changes {
// Sort always comes first if present so the rendering will be fine // Sort always comes first if present so the rendering will be fine
match changes { match change {
SectionChangesNeeded::Sort => { SectionChangesNeeded::Sort => {
site.register_tera_global_fns(); site.register_tera_global_fns();
} }
SectionChangesNeeded::Render => { SectionChangesNeeded::Render => site.render_section(
site.render_section(&site.library.get_section(&pathbuf).unwrap(), false)? &site.library.read().unwrap().get_section(&pathbuf).unwrap(),
} false,
SectionChangesNeeded::RenderWithPages => { )?,
site.render_section(&site.library.get_section(&pathbuf).unwrap(), true)? SectionChangesNeeded::RenderWithPages => site.render_section(
} &site.library.read().unwrap().get_section(&pathbuf).unwrap(),
true,
)?,
// not a common enough operation to make it worth optimizing // not a common enough operation to make it worth optimizing
SectionChangesNeeded::Delete | SectionChangesNeeded::Transparent => { SectionChangesNeeded::Delete | SectionChangesNeeded::Transparent => {
site.build()?; site.build()?;
@ -170,49 +178,54 @@ fn handle_section_editing(site: &mut Site, path: &Path) -> Result<()> {
         None => {
             site.populate_sections();
             site.register_tera_global_fns();
-            site.render_section(&site.library.get_section(&pathbuf).unwrap(), true)
+            site.render_section(&site.library.read().unwrap().get_section(&pathbuf).unwrap(), true)
         }
     }
 }

-macro_rules! render_parent_section {
+macro_rules! render_parent_sections {
     ($site: expr, $path: expr) => {
-        if let Some(s) = $site.library.find_parent_section($path) {
+        for s in $site.library.read().unwrap().find_parent_sections($path) {
             $site.render_section(s, false)?;
-        };
+        }
     };
 }
 /// Handles a page being edited in some ways
 fn handle_page_editing(site: &mut Site, path: &Path) -> Result<()> {
-    let page = Page::from_file(path, &site.config)?;
+    let page = Page::from_file(path, &site.config, &site.base_path)?;
     let pathbuf = path.to_path_buf();
     match site.add_page(page, true)? {
         // Updating a page
         Some(prev) => {
             site.populate_sections();
             site.populate_taxonomies()?;
-            // Front matter didn't change, only content did
-            if site.library.get_page(&pathbuf).unwrap().meta == prev.meta {
-                // Other than the page itself, the summary might be seen
-                // on a paginated list for a blog for example
-                if site.library.get_page(&pathbuf).unwrap().summary.is_some() {
-                    render_parent_section!(site, path);
-                }
-                site.register_tera_global_fns();
-                return site.render_page(&site.library.get_page(&pathbuf).unwrap());
-            }
+            site.register_tera_global_fns();
+            {
+                let library = site.library.read().unwrap();
+                // Front matter didn't change, only content did
+                if library.get_page(&pathbuf).unwrap().meta == prev.meta {
+                    // Other than the page itself, the summary might be seen
+                    // on a paginated list for a blog for example
+                    if library.get_page(&pathbuf).unwrap().summary.is_some() {
+                        render_parent_sections!(site, path);
+                    }
+                    return site.render_page(&library.get_page(&pathbuf).unwrap());
+                }
+            }
             // Front matter changed
-            for changes in find_page_front_matter_changes(
-                &site.library.get_page(&pathbuf).unwrap().meta,
+            let changes = find_page_front_matter_changes(
+                &site.library.read().unwrap().get_page(&pathbuf).unwrap().meta,
                 &prev.meta,
-            ) {
+            );
+            for change in changes {
                 site.register_tera_global_fns();
                 // Sort always comes first if present so the rendering will be fine
-                match changes {
+                match change {
                     PageChangesNeeded::Taxonomies => {
                         site.populate_taxonomies()?;
                         site.render_taxonomies()?;
@@ -221,8 +234,10 @@ fn handle_page_editing(site: &mut Site, path: &Path) -> Result<()> {
                         site.render_index()?;
                     }
                     PageChangesNeeded::Render => {
-                        render_parent_section!(site, path);
-                        site.render_page(&site.library.get_page(&path.to_path_buf()).unwrap())?;
+                        render_parent_sections!(site, path);
+                        site.render_page(
+                            &site.library.read().unwrap().get_page(&path.to_path_buf()).unwrap(),
+                        )?;
                     }
                 };
             }
@@ -275,8 +290,11 @@ pub fn after_content_rename(site: &mut Site, old: &Path, new: &Path) -> Result<()> {
     if new_path.file_name().unwrap() == "_index.md" {
         // We aren't entirely sure where the original thing was so just try to delete whatever was
         // at the old path
-        site.library.remove_page(&old.to_path_buf());
-        site.library.remove_section(&old.to_path_buf());
+        {
+            let mut library = site.library.write().unwrap();
+            library.remove_page(&old.to_path_buf());
+            library.remove_section(&old.to_path_buf());
+        }
         return handle_section_editing(site, &new_path);
     }
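The scoped block around the two `remove_*` calls matters: `site.library` now sits behind an `RwLock`, and the write guard must be dropped before `handle_section_editing` takes read locks again. A minimal std-only sketch of that pattern (the `Library` type and `replace_page` helper are illustrative stand-ins, not zola's API):

```rust
use std::sync::RwLock;

/// Hypothetical stand-in for the site's content library.
pub struct Library {
    pub pages: Vec<String>,
}

/// Mutate the library under a scoped write lock, so the guard is dropped
/// before the caller takes read locks again (as rebuild.rs does before
/// calling back into rendering).
pub fn replace_page(library: &RwLock<Library>, old: &str, new: &str) -> usize {
    {
        let mut lib = library.write().unwrap();
        lib.pages.retain(|p| p != old);
        lib.pages.push(new.to_string());
    } // write guard dropped here

    // Safe to read again; a still-held write guard would deadlock this read.
    library.read().unwrap().pages.len()
}

fn main() {
    let library = RwLock::new(Library { pages: vec!["old.md".to_string()] });
    let count = replace_page(&library, "old.md", "new.md");
    println!("{} page(s)", count); // prints "1 page(s)"
}
```

The same reasoning explains why several call sites below switch from `site.library.get_page(...)` to `site.library.read().unwrap().get_page(...)`: each access now acquires a short-lived guard instead of borrowing the field directly.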
@@ -287,8 +305,8 @@ pub fn after_content_rename(site: &mut Site, old: &Path, new: &Path) -> Result<()> {
     } else {
         old.to_path_buf()
     };
-    site.library.remove_page(&old_path);
-    return handle_page_editing(site, &new_path);
+    site.library.write().unwrap().remove_page(&old_path);
+    handle_page_editing(site, &new_path)
 }

 /// What happens when a section or a page is created/edited
@@ -297,9 +315,15 @@ pub fn after_content_change(site: &mut Site, path: &Path) -> Result<()> {
     let is_md = path.extension().unwrap() == "md";
     let index = path.parent().unwrap().join("index.md");
let mut potential_indices = vec![path.parent().unwrap().join("index.md")];
for language in &site.config.languages {
potential_indices.push(path.parent().unwrap().join(format!("index.{}.md", language.code)));
}
let colocated_index = potential_indices.contains(&path.to_path_buf());
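The candidate-building above can be condensed into a small, self-contained predicate. This is a sketch mirroring the logic just shown, not zola's exact function; the language codes in the demo are illustrative:

```rust
use std::path::{Path, PathBuf};

/// A changed file is a colocated index if it is `index.md` or
/// `index.<code>.md` for one of the configured language codes.
pub fn is_colocated_index(path: &Path, languages: &[&str]) -> bool {
    let parent = match path.parent() {
        Some(p) => p,
        None => return false,
    };
    let mut candidates: Vec<PathBuf> = vec![parent.join("index.md")];
    for code in languages {
        candidates.push(parent.join(format!("index.{}.md", code)));
    }
    candidates.contains(&path.to_path_buf())
}

fn main() {
    let langs = ["fr", "it"];
    // `index.fr.md` next to its assets counts as a colocated index page
    println!("{}", is_colocated_index(Path::new("blog/post/index.fr.md"), &langs)); // prints "true"
}
```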
     // A few situations can happen:
     // 1. Change on .md files
-    //     a. Is there an `index.md`? Return an error if it's something other than delete
+    //     a. Is there already an `index.md`? Return an error if it's something other than delete
     //     b. Deleted? remove the element
     //     c. Edited?
     //         1. filename is `_index.md`, this is a section
@@ -315,9 +339,9 @@ pub fn after_content_change(site: &mut Site, path: &Path) -> Result<()> {
     }

     // Added another .md in a assets directory
-    if index.exists() && path.exists() && path != index {
+    if index.exists() && path.exists() && !colocated_index {
         bail!(
-            "Change on {:?} detected but there is already an `index.md` in the same folder",
+            "Change on {:?} detected but only files named `index.md` with an optional language code are allowed",
             path.display()
         );
     } else if index.exists() && !path.exists() {
@@ -344,7 +368,8 @@ pub fn after_template_change(site: &mut Site, path: &Path) -> Result<()> {
     match filename {
         "sitemap.xml" => site.render_sitemap(),
-        "rss.xml" => site.render_rss_feed(site.library.pages_values(), None),
+        "rss.xml" => site.render_rss_feed(site.library.read().unwrap().pages_values(), None),
+        "split_sitemap_index.xml" => site.render_sitemap(),
         "robots.txt" => site.render_robots(),
         "single.html" | "list.html" => site.render_taxonomies(),
         "page.html" => {
@@ -16,15 +16,15 @@ use rebuild::{after_content_change, after_content_rename};
 // Loads the test_site in a tempdir and build it there
 // Returns (site_path_in_tempdir, site)
 macro_rules! load_and_build_site {
-    ($tmp_dir: expr) => {{
+    ($tmp_dir: expr, $site: expr) => {{
         let mut path =
             env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-        path.push("test_site");
+        path.push($site);
         let mut options = dir::CopyOptions::new();
         options.copy_inside = true;
         dir::copy(&path, &$tmp_dir, &options).unwrap();
-        let site_path = $tmp_dir.path().join("test_site");
+        let site_path = $tmp_dir.path().join($site);
         let mut site = Site::new(&site_path, "config.toml").unwrap();
         site.load().unwrap();
         let public = &site_path.join("public");
@@ -81,7 +81,7 @@ macro_rules! rename {
 #[test]
 fn can_rebuild_after_simple_change_to_page_content() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let file_path = edit_file!(
         site_path,
         "content/rebuild/first.md",
@@ -103,7 +103,7 @@ Some content"#
 #[test]
 fn can_rebuild_after_title_change_page_global_func_usage() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let file_path = edit_file!(
         site_path,
         "content/rebuild/first.md",
@@ -125,7 +125,7 @@ date = 2017-01-01
 #[test]
 fn can_rebuild_after_sort_change_in_section() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let file_path = edit_file!(
         site_path,
         "content/rebuild/_index.md",
@@ -150,7 +150,7 @@ template = "rebuild.html"
 #[test]
 fn can_rebuild_after_transparent_change() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let file_path = edit_file!(
         site_path,
         "content/posts/2018/_index.md",
@@ -182,7 +182,7 @@ insert_anchor_links = "left"
 #[test]
 fn can_rebuild_after_renaming_page() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let (old_path, new_path) = rename!(site_path, "content/posts/simple.md", "hard.md");

     let res = after_content_rename(&mut site, &old_path, &new_path);
@@ -195,7 +195,7 @@ fn can_rebuild_after_renaming_page() {
 #[test]
 fn can_rebuild_after_renaming_colocated_asset_folder() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let (old_path, new_path) =
         rename!(site_path, "content/posts/with-assets", "with-assets-updated");
     assert!(file_contains!(site_path, "content/posts/with-assets-updated/index.md", "Hello"));
@@ -214,7 +214,7 @@ fn can_rebuild_after_renaming_colocated_asset_folder() {
 #[test]
 fn can_rebuild_after_renaming_section_folder() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let (old_path, new_path) = rename!(site_path, "content/posts", "new-posts");
     assert!(file_contains!(site_path, "content/new-posts/simple.md", "simple"));
@@ -227,7 +227,7 @@ fn can_rebuild_after_renaming_section_folder() {
 #[test]
 fn can_rebuild_after_renaming_non_md_asset_in_colocated_folder() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let (old_path, new_path) =
         rename!(site_path, "content/posts/with-assets/zola.png", "gutenberg.png");
@@ -239,7 +239,7 @@ fn can_rebuild_after_renaming_non_md_asset_in_colocated_folder() {
 #[test]
 fn can_rebuild_after_deleting_file() {
     let tmp_dir = tempdir().expect("create temp dir");
-    let (site_path, mut site) = load_and_build_site!(tmp_dir);
+    let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
     let path = site_path.join("content").join("posts").join("fixed-slug.md");
     fs::remove_file(&path).unwrap();
@@ -247,3 +247,42 @@ fn can_rebuild_after_deleting_file() {
     println!("{:?}", res);
     assert!(res.is_ok());
 }
#[test]
fn can_rebuild_after_editing_in_colocated_asset_folder_with_language() {
let tmp_dir = tempdir().expect("create temp dir");
let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site_i18n");
let file_path = edit_file!(
site_path,
"content/blog/with-assets/index.fr.md",
br#"
+++
date = 2018-11-11
+++
Edite
"#
);
let res = after_content_change(&mut site, &file_path);
println!("{:?}", res);
assert!(res.is_ok());
assert!(file_contains!(site_path, "public/fr/blog/with-assets/index.html", "Edite"));
}
// https://github.com/getzola/zola/issues/620
#[test]
fn can_rebuild_after_renaming_section_and_deleting_file() {
let tmp_dir = tempdir().expect("create temp dir");
let (site_path, mut site) = load_and_build_site!(tmp_dir, "test_site");
let (old_path, new_path) = rename!(site_path, "content/posts/", "post/");
let res = after_content_rename(&mut site, &old_path, &new_path);
assert!(res.is_ok());
let path = site_path.join("content").join("_index.md");
fs::remove_file(&path).unwrap();
let res = after_content_change(&mut site, &path);
println!("{:?}", res);
assert!(res.is_ok());
}
@@ -4,9 +4,9 @@ version = "0.1.0"
 authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]

 [dependencies]
-tera = { version = "0.11", features = ["preserve_order"] }
+tera = { version = "1.0.0-alpha.3", features = ["preserve_order"] }
 syntect = "3"
-pulldown-cmark = "0.2"
+pulldown-cmark = "0.4"
 slug = "0.1"
 serde = "1"
 serde_derive = "1"
@@ -1,6 +1,3 @@
-use std::borrow::Cow::{Borrowed, Owned};
-
-use self::cmark::{Event, Options, Parser, Tag};
 use pulldown_cmark as cmark;
 use slug::slugify;
 use syntect::easy::HighlightLines;
@@ -9,14 +6,19 @@ use syntect::html::{
 };

 use config::highlighting::{get_highlighter, SYNTAX_SET, THEME_SET};
-use errors::Result;
-use link_checker::check_url;
-use utils::site::resolve_internal_link;
 use context::RenderContext;
-use table_of_contents::{make_table_of_contents, Header, TempHeader};
+use errors::{Error, Result};
+use front_matter::InsertAnchor;
+use link_checker::check_url;
+use table_of_contents::{make_table_of_contents, Header};
+use utils::site::resolve_internal_link;
+use utils::vec::InsertMany;

-const CONTINUE_READING: &str = "<p><a name=\"continue-reading\"></a></p>\n";
+use self::cmark::{Event, LinkType, Options, Parser, Tag};
+
+const CONTINUE_READING: &str =
+    "<p id=\"zola-continue-reading\"><a name=\"continue-reading\"></a></p>\n";
+const ANCHOR_LINK_TEMPLATE: &str = "anchor-link.html";

 #[derive(Debug)]
 pub struct Rendered {
@@ -25,6 +27,20 @@ pub struct Rendered {
     pub toc: Vec<Header>,
 }
// tracks a header in a slice of pulldown-cmark events
#[derive(Debug)]
struct HeaderRef {
start_idx: usize,
end_idx: usize,
level: i32,
}
impl HeaderRef {
fn new(start: usize, level: i32) -> HeaderRef {
HeaderRef { start_idx: start, end_idx: 0, level }
}
}
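`HeaderRef` is the backbone of the new two-pass rendering: the first pass records where each header starts and ends in the event stream, so a second pass can slice `events[start_idx + 1..end_idx]` to collect its text. A simplified, std-only model of that first pass (the `Ev` enum is a stand-in, not pulldown-cmark's `Event`):

```rust
/// Minimal stand-in for the markdown event stream.
#[derive(Debug, PartialEq)]
enum Ev {
    HeaderStart(i32),
    Text(String),
    HeaderEnd,
}

/// First pass: record (start_idx, end_idx, level) for every header,
/// pairing each end event with the most recently opened header.
fn header_spans(events: &[Ev]) -> Vec<(usize, usize, i32)> {
    let mut spans = vec![];
    for (i, ev) in events.iter().enumerate() {
        match ev {
            Ev::HeaderStart(level) => spans.push((i, 0, *level)),
            Ev::HeaderEnd => spans.last_mut().expect("end before start").1 = i,
            _ => (),
        }
    }
    spans
}

fn main() {
    let events = vec![Ev::HeaderStart(2), Ev::Text("Hello".into()), Ev::HeaderEnd];
    println!("{:?}", header_spans(&events)); // prints "[(0, 2, 2)]"
}
```

Keeping indices instead of accumulating strings on the fly is what lets the real code replace `events[start_idx]` with an `<hN id="...">` tag and splice anchor links in later, without the old `in_header`/`TempHeader` state machine.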
 // We might have cases where the slug is already present in our list of anchor
 // for example an article could have several titles named Example
 // We add a counter after the slug if the slug is already present, which
@@ -49,111 +65,19 @@ fn is_colocated_asset_link(link: &str) -> bool {
         && !link.starts_with("mailto:")
 }
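The counter scheme described in the comment above can be sketched as a small recursive function. This mirrors what zola's `find_anchor` (whose body is elided from this diff) has to do; the exact upstream implementation may differ:

```rust
/// If `name` is already used as an anchor, append `-1`, `-2`, ... until
/// the slug is unique among the anchors inserted so far.
pub fn find_anchor(anchors: &[String], name: String, level: u16) -> String {
    if level == 0 {
        // Level 0: try the bare slug first, without a numeric suffix.
        if !anchors.contains(&name) {
            return name;
        }
        return find_anchor(anchors, name, 1);
    }

    let new_anchor = format!("{}-{}", name, level);
    if !anchors.contains(&new_anchor) {
        return new_anchor;
    }
    find_anchor(anchors, name, level + 1)
}

fn main() {
    let anchors = vec!["example".to_string(), "example-1".to_string()];
    println!("{}", find_anchor(&anchors, "example".to_string(), 0)); // prints "example-2"
}
```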
-pub fn markdown_to_html(content: &str, context: &RenderContext) -> Result<Rendered> {
-    // the rendered html
-    let mut html = String::with_capacity(content.len());
-    // Set while parsing
-    let mut error = None;
-
-    let mut background = IncludeBackground::Yes;
-    let mut highlighter: Option<(HighlightLines, bool)> = None;
-    // If we get text in header, we need to insert the id and a anchor
-    let mut in_header = false;
-    // pulldown_cmark can send several text events for a title if there are markdown
-    // specific characters like `!` in them. We only want to insert the anchor the first time
-    let mut header_created = false;
-    let mut anchors: Vec<String> = vec![];
-
-    let mut headers = vec![];
-    // Defaults to a 0 level so not a real header
-    // It should be an Option ideally but not worth the hassle to update
-    let mut temp_header = TempHeader::default();
-
-    let mut opts = Options::empty();
-    let mut has_summary = false;
-    opts.insert(Options::ENABLE_TABLES);
-    opts.insert(Options::ENABLE_FOOTNOTES);
-
-    {
-        let parser = Parser::new_ext(content, opts).map(|event| {
-            match event {
-                Event::Text(text) => {
-                    // Header first
-                    if in_header {
-                        if header_created {
-                            temp_header.add_text(&text);
-                            return Event::Html(Borrowed(""));
-                        }
-                        // += as we might have some <code> or other things already there
-                        temp_header.add_text(&text);
-                        header_created = true;
-                        return Event::Html(Borrowed(""));
-                    }
-
-                    // if we are in the middle of a code block
-                    if let Some((ref mut highlighter, in_extra)) = highlighter {
-                        let highlighted = if in_extra {
-                            if let Some(ref extra) = context.config.extra_syntax_set {
-                                highlighter.highlight(&text, &extra)
-                            } else {
-                                unreachable!("Got a highlighter from extra syntaxes but no extra?");
-                            }
-                        } else {
-                            highlighter.highlight(&text, &SYNTAX_SET)
-                        };
-                        //let highlighted = &highlighter.highlight(&text, ss);
-                        let html = styled_line_to_highlighted_html(&highlighted, background);
-                        return Event::Html(Owned(html));
-                    }
-
-                    // Business as usual
-                    Event::Text(text)
-                }
-                Event::Start(Tag::CodeBlock(ref info)) => {
-                    if !context.config.highlight_code {
-                        return Event::Html(Borrowed("<pre><code>"));
-                    }
-                    let theme = &THEME_SET.themes[&context.config.highlight_theme];
-                    highlighter = Some(get_highlighter(info, &context.config));
-                    // This selects the background color the same way that start_coloured_html_snippet does
-                    let color =
-                        theme.settings.background.unwrap_or(::syntect::highlighting::Color::WHITE);
-                    background = IncludeBackground::IfDifferent(color);
-                    let snippet = start_highlighted_html_snippet(theme);
-                    Event::Html(Owned(snippet.0))
-                }
-                Event::End(Tag::CodeBlock(_)) => {
-                    if !context.config.highlight_code {
-                        return Event::Html(Borrowed("</code></pre>\n"));
-                    }
-                    // reset highlight and close the code block
-                    highlighter = None;
-                    Event::Html(Borrowed("</pre>"))
-                }
-                Event::Start(Tag::Image(src, title)) => {
-                    if is_colocated_asset_link(&src) {
-                        return Event::Start(Tag::Image(
-                            Owned(format!("{}{}", context.current_page_permalink, src)),
-                            title,
-                        ));
-                    }
-                    Event::Start(Tag::Image(src, title))
-                }
-                Event::Start(Tag::Link(link, title)) => {
-                    // A few situations here:
-                    // - it could be a relative link (starting with `./`)
-                    // - it could be a link to a co-located asset
-                    // - it could be a normal link
-                    // - any of those can be in a header or not: if it's in a header
-                    //   we need to append to a string
-                    let fixed_link = if link.starts_with("./") {
-                        match resolve_internal_link(&link, context.permalinks) {
-                            Ok(url) => url,
-                            Err(_) => {
-                                error = Some(format!("Relative link {} not found.", link).into());
-                                return Event::Html(Borrowed(""));
-                            }
-                        }
-                    } else if is_colocated_asset_link(&link) {
+fn fix_link(link_type: LinkType, link: &str, context: &RenderContext) -> Result<String> {
+    if link_type == LinkType::Email {
+        return Ok(link.to_string());
+    }
+    // A few situations here:
+    // - it could be a relative link (starting with `./`)
+    // - it could be a link to a co-located asset
+    // - it could be a normal link
+    let result = if link.starts_with("./") {
+        match resolve_internal_link(&link, context.permalinks) {
+            Ok(url) => url,
+            Err(_) => {
+                return Err(format!("Relative link {} not found.", link).into());
+            }
+        }
+    } else if is_colocated_asset_link(&link) {
@@ -166,77 +90,187 @@ pub fn markdown_to_html(content: &str, context: &RenderContext) -> Result<Rendered> {
         if res.is_valid() {
             link.to_string()
         } else {
-            error = Some(
-                format!("Link {} is not valid: {}", link, res.message()).into(),
-            );
-            String::new()
+            return Err(format!("Link {} is not valid: {}", link, res.message()).into());
         }
     } else {
         link.to_string()
     };
+    Ok(result)
+}

-                    if in_header {
-                        let html = if title.is_empty() {
-                            format!("<a href=\"{}\">", fixed_link)
-                        } else {
-                            format!("<a href=\"{}\" title=\"{}\">", fixed_link, title)
-                        };
-                        temp_header.add_html(&html);
-                        return Event::Html(Borrowed(""));
-                    }
-                    Event::Start(Tag::Link(Owned(fixed_link), title))
-                }
-                Event::End(Tag::Link(_, _)) => {
-                    if in_header {
-                        temp_header.add_html("</a>");
-                        return Event::Html(Borrowed(""));
-                    }
-                    event
-                }
-                Event::Start(Tag::Code) => {
-                    if in_header {
-                        temp_header.add_html("<code>");
-                        return Event::Html(Borrowed(""));
-                    }
-                    event
-                }
-                Event::End(Tag::Code) => {
-                    if in_header {
-                        temp_header.add_html("</code>");
-                        return Event::Html(Borrowed(""));
-                    }
-                    event
-                }
-                Event::Start(Tag::Header(num)) => {
-                    in_header = true;
-                    temp_header = TempHeader::new(num);
-                    Event::Html(Borrowed(""))
-                }
-                Event::End(Tag::Header(_)) => {
-                    // End of a header, reset all the things and return the header string
-                    let id = find_anchor(&anchors, slugify(&temp_header.title), 0);
-                    anchors.push(id.clone());
-                    temp_header.permalink = format!("{}#{}", context.current_page_permalink, id);
-                    temp_header.id = id;
-                    in_header = false;
-                    header_created = false;
-                    let val = temp_header.to_string(context.tera, context.insert_anchor);
-                    headers.push(temp_header.clone());
-                    temp_header = TempHeader::default();
-                    Event::Html(Owned(val))
-                }
-                Event::Html(ref markup) if markup.contains("<!-- more -->") => {
-                    has_summary = true;
-                    Event::Html(Borrowed(CONTINUE_READING))
-                }
-                _ => event,
-            }
-        });
-        cmark::html::push_html(&mut html, parser);
+/// get only text in a slice of events
+fn get_text(parser_slice: &[Event]) -> String {
+    let mut title = String::new();
+
+    for event in parser_slice.iter() {
+        if let Event::Text(text) = event {
+            title += text;
+        }
+    }
+
+    title
+}
+
+fn get_header_refs(events: &[Event]) -> Vec<HeaderRef> {
+    let mut header_refs = vec![];
+
+    for (i, event) in events.iter().enumerate() {
+        match event {
+            Event::Start(Tag::Header(level)) => {
+                header_refs.push(HeaderRef::new(i, *level));
+            }
+            Event::End(Tag::Header(_)) => {
+                let msg = "Header end before start?";
+                header_refs.last_mut().expect(msg).end_idx = i;
+            }
+            _ => (),
+        }
+    }
+
+    header_refs
+}
+
+pub fn markdown_to_html(content: &str, context: &RenderContext) -> Result<Rendered> {
+    // the rendered html
+    let mut html = String::with_capacity(content.len());
+    // Set while parsing
+    let mut error = None;
+
+    let mut background = IncludeBackground::Yes;
+    let mut highlighter: Option<(HighlightLines, bool)> = None;
+
+    let mut inserted_anchors: Vec<String> = vec![];
+    let mut headers: Vec<Header> = vec![];
+
+    let mut opts = Options::empty();
+    let mut has_summary = false;
+    opts.insert(Options::ENABLE_TABLES);
+    opts.insert(Options::ENABLE_FOOTNOTES);
+
+    {
+        let mut events = Parser::new_ext(content, opts)
+            .map(|event| {
+                match event {
+                    Event::Text(text) => {
+                        // if we are in the middle of a code block
+                        if let Some((ref mut highlighter, in_extra)) = highlighter {
+                            let highlighted = if in_extra {
+                                if let Some(ref extra) = context.config.extra_syntax_set {
+                                    highlighter.highlight(&text, &extra)
+                                } else {
+                                    unreachable!(
+                                        "Got a highlighter from extra syntaxes but no extra?"
+                                    );
+                                }
+                            } else {
+                                highlighter.highlight(&text, &SYNTAX_SET)
+                            };
+                            //let highlighted = &highlighter.highlight(&text, ss);
+                            let html = styled_line_to_highlighted_html(&highlighted, background);
+                            return Event::Html(html.into());
+                        }
+
+                        // Business as usual
+                        Event::Text(text)
+                    }
+                    Event::Start(Tag::CodeBlock(ref info)) => {
+                        if !context.config.highlight_code {
+                            return Event::Html("<pre><code>".into());
+                        }
+                        let theme = &THEME_SET.themes[&context.config.highlight_theme];
+                        highlighter = Some(get_highlighter(info, &context.config));
+                        // This selects the background color the same way that start_coloured_html_snippet does
+                        let color = theme
+                            .settings
+                            .background
+                            .unwrap_or(::syntect::highlighting::Color::WHITE);
+                        background = IncludeBackground::IfDifferent(color);
+                        let snippet = start_highlighted_html_snippet(theme);
+                        Event::Html(snippet.0.into())
+                    }
+                    Event::End(Tag::CodeBlock(_)) => {
+                        if !context.config.highlight_code {
+                            return Event::Html("</code></pre>\n".into());
+                        }
+                        // reset highlight and close the code block
+                        highlighter = None;
+                        Event::Html("</pre>".into())
+                    }
+                    Event::Start(Tag::Image(link_type, src, title)) => {
+                        if is_colocated_asset_link(&src) {
+                            let link = format!("{}{}", context.current_page_permalink, &*src);
+                            return Event::Start(Tag::Image(link_type, link.into(), title));
+                        }
+                        Event::Start(Tag::Image(link_type, src, title))
+                    }
+                    Event::Start(Tag::Link(link_type, link, title)) => {
+                        let fixed_link = match fix_link(link_type, &link, context) {
+                            Ok(fixed_link) => fixed_link,
+                            Err(err) => {
+                                error = Some(err);
+                                return Event::Html("".into());
+                            }
+                        };
+                        Event::Start(Tag::Link(link_type, fixed_link.into(), title))
+                    }
+                    Event::Html(ref markup) if markup.contains("<!-- more -->") => {
+                        has_summary = true;
+                        Event::Html(CONTINUE_READING.into())
+                    }
+                    _ => event,
+                }
+            })
+            .collect::<Vec<_>>(); // We need to collect the events to make a second pass
+
+        let header_refs = get_header_refs(&events);
+
+        let mut anchors_to_insert = vec![];
+
+        for header_ref in header_refs {
+            let start_idx = header_ref.start_idx;
+            let end_idx = header_ref.end_idx;
+            let title = get_text(&events[start_idx + 1..end_idx]);
+            let id = find_anchor(&inserted_anchors, slugify(&title), 0);
+            inserted_anchors.push(id.clone());
+
+            // insert `id` to the tag
+            let html = format!("<h{lvl} id=\"{id}\">", lvl = header_ref.level, id = id);
+            events[start_idx] = Event::Html(html.into());
+
+            // generate anchors and places to insert them
+            if context.insert_anchor != InsertAnchor::None {
+                let anchor_idx = match context.insert_anchor {
+                    InsertAnchor::Left => start_idx + 1,
+                    InsertAnchor::Right => end_idx,
+                    InsertAnchor::None => 0, // Not important
+                };
+                let mut c = tera::Context::new();
+                c.insert("id", &id);
+                let anchor_link = utils::templates::render_template(
+                    &ANCHOR_LINK_TEMPLATE,
+                    context.tera,
+                    c,
+                    &None,
+                )
+                .map_err(|e| Error::chain("Failed to render anchor link template", e))?;
+                anchors_to_insert.push((anchor_idx, Event::Html(anchor_link.into())));
+            }
+
+            // record header to make table of contents
+            let permalink = format!("{}#{}", context.current_page_permalink, id);
+            let h = Header { level: header_ref.level, id, permalink, title, children: Vec::new() };
+            headers.push(h);
+        }
+
+        if context.insert_anchor != InsertAnchor::None {
+            events.insert_many(anchors_to_insert);
+        }
+
+        cmark::html::push_html(&mut html, events.into_iter());
     }
     if let Some(e) = error {
@@ -245,7 +279,7 @@ pub fn markdown_to_html(content: &str, context: &RenderContext) -> Result<Rendered> {
     Ok(Rendered {
         summary_len: if has_summary { html.find(CONTINUE_READING) } else { None },
         body: html,
-        toc: make_table_of_contents(&headers),
+        toc: make_table_of_contents(headers),
     })
 }
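The `summary_len` computed above is simply the byte offset of the continue-reading marker in the rendered HTML, found with `str::find`; everything before it is the summary. A self-contained sketch of that mechanism (the helper function is illustrative, not zola's API):

```rust
/// The marker the renderer emits in place of `<!-- more -->` (from the diff above).
const CONTINUE_READING: &str =
    "<p id=\"zola-continue-reading\"><a name=\"continue-reading\"></a></p>\n";

/// Byte offset of the marker in the rendered body, if a summary was declared.
pub fn summary_len(body: &str, has_summary: bool) -> Option<usize> {
    if has_summary {
        body.find(CONTINUE_READING)
    } else {
        None
    }
}

fn main() {
    let body = format!("<p>Intro.</p>\n{}<p>Rest.</p>\n", CONTINUE_READING);
    let len = summary_len(&body, true).unwrap();
    // "<p>Intro.</p>\n" is 14 bytes long
    println!("{}", len); // prints "14"
}
```

Since `str::find` returns a byte index, a template can take the summary with `&body[..len]` without re-scanning the document.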
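The `events.insert_many(anchors_to_insert)` call in the second pass splices all the anchor-link events into the collected stream at positions computed against the original vector. A plausible std-only reconstruction of what such a helper has to do (zola's actual `InsertMany` implementation may differ):

```rust
/// Splice `elems` into `v`, where each index refers to a position in the
/// ORIGINAL vector and indices are given in ascending order.
pub fn insert_many<T>(v: &mut Vec<T>, elems: Vec<(usize, T)>) {
    // Insert back-to-front so earlier indices are not shifted by later inserts.
    for (idx, elem) in elems.into_iter().rev() {
        v.insert(idx, elem);
    }
}

fn main() {
    let mut v = vec!['a', 'c', 'e'];
    insert_many(&mut v, vec![(1, 'b'), (2, 'd')]);
    println!("{:?}", v); // prints "['a', 'b', 'c', 'd', 'e']"
}
```

Batching the inserts like this is why the header pass can first compute every anchor position (`start_idx + 1` or `end_idx`) against the unmodified event stream and only mutate it once at the end.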
@ -4,7 +4,7 @@ use regex::Regex;
use tera::{to_value, Context, Map, Value}; use tera::{to_value, Context, Map, Value};
use context::RenderContext; use context::RenderContext;
use errors::{Result, ResultExt}; use errors::{Error, Result};
// This include forces recompiling this source file if the grammar file changes. // This include forces recompiling this source file if the grammar file changes.
// Uncomment it when doing changes to the .pest file // Uncomment it when doing changes to the .pest file
@ -58,7 +58,7 @@ fn parse_shortcode_call(pair: Pair<Rule>) -> (String, Map<String, Value>) {
for p in pair.into_inner() { for p in pair.into_inner() {
match p.as_rule() { match p.as_rule() {
Rule::ident => { Rule::ident => {
name = Some(p.into_span().as_str().to_string()); name = Some(p.as_span().as_str().to_string());
} }
Rule::kwarg => { Rule::kwarg => {
let mut arg_name = None; let mut arg_name = None;
@ -66,7 +66,7 @@ fn parse_shortcode_call(pair: Pair<Rule>) -> (String, Map<String, Value>) {
for p2 in p.into_inner() { for p2 in p.into_inner() {
match p2.as_rule() { match p2.as_rule() {
Rule::ident => { Rule::ident => {
arg_name = Some(p2.into_span().as_str().to_string()); arg_name = Some(p2.as_span().as_str().to_string());
} }
Rule::literal => { Rule::literal => {
arg_val = Some(parse_literal(p2)); arg_val = Some(parse_literal(p2));
@ -108,15 +108,14 @@ fn render_shortcode(
} }
if let Some(ref b) = body { if let Some(ref b) = body {
// Trimming right to avoid most shortcodes with bodies ending up with a HTML new line // Trimming right to avoid most shortcodes with bodies ending up with a HTML new line
tera_context.insert("body", b.trim_right()); tera_context.insert("body", b.trim_end());
} }
tera_context.extend(context.tera_context.clone()); tera_context.extend(context.tera_context.clone());
let tpl_name = format!("shortcodes/{}.html", name);
let res = context let template_name = format!("shortcodes/{}.html", name);
.tera
.render(&tpl_name, &tera_context) let res = utils::templates::render_template(&template_name, &context.tera, tera_context, &None)
.chain_err(|| format!("Failed to render {} shortcode", name))?; .map_err(|e| Error::chain(format!("Failed to render {} shortcode", name), e))?;
// Small hack to avoid having multiple blank lines because of Tera tags for example // Small hack to avoid having multiple blank lines because of Tera tags for example
// A blank like will cause the markdown parser to think we're out of HTML and start looking // A blank like will cause the markdown parser to think we're out of HTML and start looking
@ -170,7 +169,7 @@ pub fn render_shortcodes(content: &str, context: &RenderContext) -> Result<Strin
// We have at least a `page` pair // We have at least a `page` pair
for p in pairs.next().unwrap().into_inner() { for p in pairs.next().unwrap().into_inner() {
match p.as_rule() { match p.as_rule() {
Rule::text => res.push_str(p.into_span().as_str()), Rule::text => res.push_str(p.as_span().as_str()),
Rule::inline_shortcode => { Rule::inline_shortcode => {
let (name, args) = parse_shortcode_call(p); let (name, args) = parse_shortcode_call(p);
res.push_str(&render_shortcode(&name, &args, context, None)?); res.push_str(&render_shortcode(&name, &args, context, None)?);
@ -180,12 +179,12 @@ pub fn render_shortcodes(content: &str, context: &RenderContext) -> Result<Strin
// 3 items in inner: call, body, end // 3 items in inner: call, body, end
// we don't care about the closing tag // we don't care about the closing tag
let (name, args) = parse_shortcode_call(inner.next().unwrap()); let (name, args) = parse_shortcode_call(inner.next().unwrap());
let body = inner.next().unwrap().into_span().as_str(); let body = inner.next().unwrap().as_span().as_str();
res.push_str(&render_shortcode(&name, &args, context, Some(body))?); res.push_str(&render_shortcode(&name, &args, context, Some(body))?);
} }
Rule::ignored_inline_shortcode => { Rule::ignored_inline_shortcode => {
res.push_str( res.push_str(
&p.into_span().as_str().replacen("{{/*", "{{", 1).replacen("*/}}", "}}", 1), &p.as_span().as_str().replacen("{{/*", "{{", 1).replacen("*/}}", "}}", 1),
); );
} }
Rule::ignored_shortcode_with_body => { Rule::ignored_shortcode_with_body => {
@ -193,13 +192,13 @@ pub fn render_shortcodes(content: &str, context: &RenderContext) -> Result<Strin
match p2.as_rule() { match p2.as_rule() {
Rule::ignored_sc_body_start | Rule::ignored_sc_body_end => { Rule::ignored_sc_body_start | Rule::ignored_sc_body_end => {
res.push_str( res.push_str(
&p2.into_span() &p2.as_span()
.as_str() .as_str()
.replacen("{%/*", "{%", 1) .replacen("{%/*", "{%", 1)
.replacen("*/%}", "%}", 1), .replacen("*/%}", "%}", 1),
); );
} }
Rule::text_in_ignored_body_sc => res.push_str(p2.into_span().as_str()), Rule::text_in_ignored_body_sc => res.push_str(p2.as_span().as_str()),
_ => unreachable!("Got something weird in an ignored shortcode: {:?}", p2), _ => unreachable!("Got something weird in an ignored shortcode: {:?}", p2),
} }
} }
@ -231,7 +230,7 @@ mod tests {
panic!(); panic!();
} }
assert!(res.is_ok()); assert!(res.is_ok());
assert_eq!(res.unwrap().last().unwrap().into_span().end(), $input.len()); assert_eq!(res.unwrap().last().unwrap().as_span().end(), $input.len());
}; };
} }
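The `ignored_inline_shortcode` and `ignored_shortcode_with_body` branches above strip the escape markers so an escaped call is rendered literally. A minimal standalone sketch of that transformation (the function names are illustrative, not zola's API):

```rust
// Escaped shortcodes such as `{{/* youtube(id="xyz") */}}` are meant to be
// shown verbatim in the output. Only the first escape pair is unwrapped,
// mirroring the `replacen(.., 1)` calls in the parser above.
fn unescape_inline_shortcode(s: &str) -> String {
    s.replacen("{{/*", "{{", 1).replacen("*/}}", "}}", 1)
}

// Same idea for the `{%/* ... */%}` body-shortcode delimiters.
fn unescape_shortcode_with_body(s: &str) -> String {
    s.replacen("{%/*", "{%", 1).replacen("*/%}", "%}", 1)
}

fn main() {
    let out = unescape_inline_shortcode(r#"{{/* youtube(id="xyz") */}}"#);
    assert_eq!(out, r#"{{ youtube(id="xyz") }}"#);

    let out = unescape_shortcode_with_body("{%/* alert() */%}");
    assert_eq!(out, "{% alert() %}");
    println!("ok");
}
```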

View file

@ -1,160 +1,59 @@
use front_matter::InsertAnchor; /// Populated while receiving events from the markdown parser
use tera::{Context as TeraContext, Tera};
#[derive(Debug, PartialEq, Clone, Serialize)] #[derive(Debug, PartialEq, Clone, Serialize)]
pub struct Header { pub struct Header {
#[serde(skip_serializing)] #[serde(skip_serializing)]
pub level: i32, pub level: i32,
pub id: String, pub id: String,
pub title: String,
pub permalink: String, pub permalink: String,
pub title: String,
pub children: Vec<Header>, pub children: Vec<Header>,
} }
impl Header { impl Header {
pub fn from_temp_header(tmp: &TempHeader, children: Vec<Header>) -> Header { pub fn new(level: i32) -> Header {
Header { Header {
level: tmp.level,
id: tmp.id.clone(),
title: tmp.title.clone(),
permalink: tmp.permalink.clone(),
children,
}
}
}
/// Populated while receiving events from the markdown parser
#[derive(Debug, PartialEq, Clone)]
pub struct TempHeader {
pub level: i32,
pub id: String,
pub permalink: String,
pub title: String,
pub html: String,
}
impl TempHeader {
pub fn new(level: i32) -> TempHeader {
TempHeader {
level, level,
id: String::new(), id: String::new(),
permalink: String::new(), permalink: String::new(),
title: String::new(), title: String::new(),
html: String::new(), children: Vec::new(),
}
}
pub fn add_html(&mut self, val: &str) {
self.html += val;
}
pub fn add_text(&mut self, val: &str) {
self.html += val;
self.title += val;
}
/// Transform all the information we have about this header into the HTML string for it
pub fn to_string(&self, tera: &Tera, insert_anchor: InsertAnchor) -> String {
let anchor_link = if insert_anchor != InsertAnchor::None {
let mut c = TeraContext::new();
c.insert("id", &self.id);
tera.render("anchor-link.html", &c).unwrap()
} else {
String::new()
};
match insert_anchor {
InsertAnchor::None => format!(
"<h{lvl} id=\"{id}\">{t}</h{lvl}>\n",
lvl = self.level,
t = self.html,
id = self.id
),
InsertAnchor::Left => format!(
"<h{lvl} id=\"{id}\">{a}{t}</h{lvl}>\n",
lvl = self.level,
a = anchor_link,
t = self.html,
id = self.id
),
InsertAnchor::Right => format!(
"<h{lvl} id=\"{id}\">{t}{a}</h{lvl}>\n",
lvl = self.level,
a = anchor_link,
t = self.html,
id = self.id
),
} }
} }
} }
impl Default for TempHeader { impl Default for Header {
fn default() -> Self { fn default() -> Self {
TempHeader::new(0) Header::new(0)
} }
} }
/// Recursively finds children of a header
fn find_children(
parent_level: i32,
start_at: usize,
temp_headers: &[TempHeader],
) -> (usize, Vec<Header>) {
let mut headers = vec![];
let mut start_at = start_at;
// If we have children, we will need to skip some headers since they are already inserted
let mut to_skip = 0;
for h in &temp_headers[start_at..] {
// stop when we encounter a title at the same level or higher
// than the parent one. Here a lower integer is considered higher as we are talking about
// HTML headers: h1, h2, h3, h4, h5 and h6
if h.level <= parent_level {
return (start_at, headers);
}
// Do we need to skip some headers?
if to_skip > 0 {
to_skip -= 1;
continue;
}
let (end, children) = find_children(h.level, start_at + 1, temp_headers);
headers.push(Header::from_temp_header(h, children));
// we didn't find any children
if end == start_at {
start_at += 1;
to_skip = 0;
} else {
// calculates how many we need to skip. Since the find_children start_at starts at 1,
// we need to remove 1 to ensure correctness
to_skip = end - start_at - 1;
start_at = end;
}
// we don't want to index out of bounds
if start_at + 1 > temp_headers.len() {
return (start_at, headers);
}
}
(start_at, headers)
}
/// Converts the flat temp headers into a nested set of headers /// Converts the flat temp headers into a nested set of headers
/// representing the hierarchy /// representing the hierarchy
pub fn make_table_of_contents(temp_headers: &[TempHeader]) -> Vec<Header> { pub fn make_table_of_contents(headers: Vec<Header>) -> Vec<Header> {
let mut toc = vec![]; let mut toc = vec![];
let mut start_idx = 0; 'parent: for header in headers {
for (i, h) in temp_headers.iter().enumerate() { if toc.is_empty() {
if i < start_idx { toc.push(header);
continue; continue;
} }
let (end_idx, children) = find_children(h.level, start_idx + 1, temp_headers);
start_idx = end_idx; // See if we have to insert as a child of a previous header
toc.push(Header::from_temp_header(h, children)); for h in toc.iter_mut().rev() {
// Look in its children first
for child in h.children.iter_mut().rev() {
if header.level > child.level {
child.children.push(header);
continue 'parent;
}
}
if header.level > h.level {
h.children.push(header);
continue 'parent;
}
}
// Nope, no parent found: just insert it at the top level // Nope, no parent found: just insert it at the top level
toc.push(header)
} }
toc toc
@ -166,25 +65,25 @@ mod tests {
#[test] #[test]
fn can_make_basic_toc() { fn can_make_basic_toc() {
let input = vec![TempHeader::new(1), TempHeader::new(1), TempHeader::new(1)]; let input = vec![Header::new(1), Header::new(1), Header::new(1)];
let toc = make_table_of_contents(&input); let toc = make_table_of_contents(input);
assert_eq!(toc.len(), 3); assert_eq!(toc.len(), 3);
} }
#[test] #[test]
fn can_make_more_complex_toc() { fn can_make_more_complex_toc() {
let input = vec![ let input = vec![
TempHeader::new(1), Header::new(1),
TempHeader::new(2), Header::new(2),
TempHeader::new(2), Header::new(2),
TempHeader::new(3), Header::new(3),
TempHeader::new(2), Header::new(2),
TempHeader::new(1), Header::new(1),
TempHeader::new(2), Header::new(2),
TempHeader::new(3), Header::new(3),
TempHeader::new(3), Header::new(3),
]; ];
let toc = make_table_of_contents(&input); let toc = make_table_of_contents(input);
assert_eq!(toc.len(), 2); assert_eq!(toc.len(), 2);
assert_eq!(toc[0].children.len(), 3); assert_eq!(toc[0].children.len(), 3);
assert_eq!(toc[1].children.len(), 1); assert_eq!(toc[1].children.len(), 1);
@ -195,15 +94,16 @@ mod tests {
#[test] #[test]
fn can_make_messy_toc() { fn can_make_messy_toc() {
let input = vec![ let input = vec![
TempHeader::new(3), Header::new(3),
TempHeader::new(2), Header::new(2),
TempHeader::new(2), Header::new(2),
TempHeader::new(3), Header::new(3),
TempHeader::new(2), Header::new(2),
TempHeader::new(1), Header::new(1),
TempHeader::new(4), Header::new(4),
]; ];
let toc = make_table_of_contents(&input); let toc = make_table_of_contents(input);
println!("{:#?}", toc);
assert_eq!(toc.len(), 5); assert_eq!(toc.len(), 5);
assert_eq!(toc[2].children.len(), 1); assert_eq!(toc[2].children.len(), 1);
assert_eq!(toc[4].children.len(), 1); assert_eq!(toc[4].children.len(), 1);
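The rewrite replaces the recursive `find_children` pass with the single labeled-loop algorithm shown above: each header attaches to the closest preceding header (or its child) with a strictly lower level, otherwise it starts a new top-level entry. A self-contained sketch of that nesting logic, with `Header` reduced to the fields the algorithm needs:

```rust
struct Header {
    level: i32,
    children: Vec<Header>,
}

impl Header {
    fn new(level: i32) -> Header {
        Header { level, children: Vec::new() }
    }
}

// Walk the flat list once; `continue 'parent` jumps to the next header as
// soon as a home is found, so each header is moved exactly once.
fn make_table_of_contents(headers: Vec<Header>) -> Vec<Header> {
    let mut toc: Vec<Header> = vec![];
    'parent: for header in headers {
        if toc.is_empty() {
            toc.push(header);
            continue;
        }
        // See if it belongs under a previous header, most recent first
        for h in toc.iter_mut().rev() {
            for child in h.children.iter_mut().rev() {
                if header.level > child.level {
                    child.children.push(header);
                    continue 'parent;
                }
            }
            if header.level > h.level {
                h.children.push(header);
                continue 'parent;
            }
        }
        // No parent found: new top-level entry
        toc.push(header)
    }
    toc
}

fn main() {
    // h1 > h2 > h3, then a second h1 back at the top level
    let toc = make_table_of_contents(vec![
        Header::new(1),
        Header::new(2),
        Header::new(3),
        Header::new(1),
    ]);
    assert_eq!(toc.len(), 2);
    assert_eq!(toc[0].children.len(), 1);
    assert_eq!(toc[0].children[0].children.len(), 1);
    println!("top-level entries: {}", toc.len());
}
```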

View file

@ -44,7 +44,7 @@ fn can_highlight_code_block_no_lang() {
let res = render_content("```\n$ gutenberg server\n$ ping\n```", &context).unwrap(); let res = render_content("```\n$ gutenberg server\n$ ping\n```", &context).unwrap();
assert_eq!( assert_eq!(
res.body, res.body,
"<pre style=\"background-color:#2b303b;\">\n<span style=\"color:#c0c5ce;\">$ gutenberg server\n</span><span style=\"color:#c0c5ce;\">$ ping\n</span></pre>" "<pre style=\"background-color:#2b303b;\">\n<span style=\"color:#c0c5ce;\">$ gutenberg server\n$ ping\n</span></pre>"
); );
} }
@ -375,6 +375,19 @@ fn can_insert_anchor_right() {
); );
} }
#[test]
fn can_insert_anchor_for_multi_header() {
let permalinks_ctx = HashMap::new();
let config = Config::default();
let context = RenderContext::new(&ZOLA_TERA, &config, "", &permalinks_ctx, InsertAnchor::Right);
let res = render_content("# Hello\n# World", &context).unwrap();
assert_eq!(
res.body,
"<h1 id=\"hello\">Hello<a class=\"zola-anchor\" href=\"#hello\" aria-label=\"Anchor link for: hello\">🔗</a>\n</h1>\n\
<h1 id=\"world\">World<a class=\"zola-anchor\" href=\"#world\" aria-label=\"Anchor link for: world\">🔗</a>\n</h1>\n"
);
}
// See https://github.com/Keats/gutenberg/issues/42 // See https://github.com/Keats/gutenberg/issues/42
#[test] #[test]
fn can_insert_anchor_with_exclamation_mark() { fn can_insert_anchor_with_exclamation_mark() {
@ -522,6 +535,47 @@ fn can_understand_link_with_title_in_header() {
); );
} }
#[test]
fn can_understand_emphasis_in_header() {
let permalinks_ctx = HashMap::new();
let config = Config::default();
let context = RenderContext::new(&ZOLA_TERA, &config, "", &permalinks_ctx, InsertAnchor::None);
let res = render_content("# *Emphasis* text", &context).unwrap();
assert_eq!(res.body, "<h1 id=\"emphasis-text\"><em>Emphasis</em> text</h1>\n");
}
#[test]
fn can_understand_strong_in_header() {
let permalinks_ctx = HashMap::new();
let config = Config::default();
let context = RenderContext::new(&ZOLA_TERA, &config, "", &permalinks_ctx, InsertAnchor::None);
let res = render_content("# **Strong** text", &context).unwrap();
assert_eq!(res.body, "<h1 id=\"strong-text\"><strong>Strong</strong> text</h1>\n");
}
#[test]
fn can_understand_code_in_header() {
let permalinks_ctx = HashMap::new();
let config = Config::default();
let context = RenderContext::new(&ZOLA_TERA, &config, "", &permalinks_ctx, InsertAnchor::None);
let res = render_content("# `Code` text", &context).unwrap();
assert_eq!(res.body, "<h1 id=\"code-text\"><code>Code</code> text</h1>\n");
}
// See https://github.com/getzola/zola/issues/569
#[test]
fn can_understand_footnote_in_header() {
let permalinks_ctx = HashMap::new();
let config = Config::default();
let context = RenderContext::new(&ZOLA_TERA, &config, "", &permalinks_ctx, InsertAnchor::None);
let res = render_content("# text [^1] there\n[^1]: footnote", &context).unwrap();
assert_eq!(res.body, r##"<h1 id="text-there">text <sup class="footnote-reference"><a href="#1">1</a></sup> there</h1>
<div class="footnote-definition" id="1"><sup class="footnote-definition-label">1</sup>
<p>footnote</p>
</div>
"##);
}
#[test] #[test]
fn can_make_valid_relative_link_in_header() { fn can_make_valid_relative_link_in_header() {
let mut permalinks = HashMap::new(); let mut permalinks = HashMap::new();
@ -633,7 +687,7 @@ fn can_show_error_message_for_invalid_external_links() {
let res = render_content("[a link](http://google.comy)", &context); let res = render_content("[a link](http://google.comy)", &context);
assert!(res.is_err()); assert!(res.is_err());
let err = res.unwrap_err(); let err = res.unwrap_err();
assert!(err.description().contains("Link http://google.comy is not valid")); assert!(format!("{}", err).contains("Link http://google.comy is not valid"));
} }
#[test] #[test]
@ -675,17 +729,25 @@ fn can_handle_summaries() {
let config = Config::default(); let config = Config::default();
let context = RenderContext::new(&tera_ctx, &config, "", &permalinks_ctx, InsertAnchor::None); let context = RenderContext::new(&tera_ctx, &config, "", &permalinks_ctx, InsertAnchor::None);
let res = render_content( let res = render_content(
"Hello [world]\n\n<!-- more -->\n\nBla bla\n\n[world]: https://vincent.is/about/", r#"
Hello [My site][world]
<!-- more -->
Bla bla
[world]: https://vincentprouillet.com
"#,
&context, &context,
) )
.unwrap(); .unwrap();
assert_eq!( assert_eq!(
res.body, res.body,
"<p>Hello <a href=\"https://vincent.is/about/\">world</a></p>\n<p><a name=\"continue-reading\"></a></p>\n<p>Bla bla</p>\n" "<p>Hello <a href=\"https://vincentprouillet.com\">My site</a></p>\n<p id=\"zola-continue-reading\"><a name=\"continue-reading\"></a></p>\n<p>Bla bla</p>\n"
); );
assert_eq!( assert_eq!(
res.summary_len, res.summary_len,
Some("<p>Hello <a href=\"https://vincent.is/about/\">world</a></p>\n".len()) Some("<p>Hello <a href=\"https://vincentprouillet.com/\">My site</a></p>".len())
); );
} }
@ -721,3 +783,31 @@ fn doesnt_try_to_highlight_content_from_shortcode() {
let res = render_content(markdown_string, &context).unwrap(); let res = render_content(markdown_string, &context).unwrap();
assert_eq!(res.body, expected); assert_eq!(res.body, expected);
} }
// TODO: re-enable once it's fixed in Tera
// https://github.com/Keats/tera/issues/373
//#[test]
//fn can_split_lines_shortcode_body() {
// let permalinks_ctx = HashMap::new();
// let mut tera = Tera::default();
// tera.extend(&ZOLA_TERA).unwrap();
//
// let shortcode = r#"{{ body | split(pat="\n") }}"#;
//
// let markdown_string = r#"
//{% alert() %}
//multi
//ple
//lines
//{% end %}
// "#;
//
// let expected = r#"<p>["multi", "ple", "lines"]</p>"#;
//
// tera.add_raw_template(&format!("shortcodes/{}.html", "alert"), shortcode).unwrap();
// let config = Config::default();
// let context = RenderContext::new(&tera, &config, "", &permalinks_ctx, InsertAnchor::None);
//
// let res = render_content(markdown_string, &context).unwrap();
// assert_eq!(res.body, expected);
//}

View file

@ -5,7 +5,7 @@ authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]
[dependencies] [dependencies]
elasticlunr-rs = "2" elasticlunr-rs = "2"
ammonia = "1" ammonia = "2"
lazy_static = "1" lazy_static = "1"
errors = { path = "../errors" } errors = { path = "../errors" }

View file

@ -4,7 +4,7 @@ version = "0.1.0"
authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"] authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]
[dependencies] [dependencies]
tera = "0.11" tera = "1.0.0-alpha.3"
glob = "0.2" glob = "0.2"
rayon = "1" rayon = "1"
serde = "1" serde = "1"

View file

@ -169,6 +169,7 @@ if __name__ == "__main__":
gen_site("medium-blog", [""], 250, is_blog=True) gen_site("medium-blog", [""], 250, is_blog=True)
gen_site("big-blog", [""], 1000, is_blog=True) gen_site("big-blog", [""], 1000, is_blog=True)
gen_site("huge-blog", [""], 10000, is_blog=True) gen_site("huge-blog", [""], 10000, is_blog=True)
gen_site("extra-huge-blog", [""], 100000, is_blog=True)
gen_site("small-kb", ["help", "help1", "help2", "help3", "help4", "help5", "help6", "help7", "help8", "help9"], 10) gen_site("small-kb", ["help", "help1", "help2", "help3", "help4", "help5", "help6", "help7", "help8", "help9"], 10)
gen_site("medium-kb", ["help", "help1", "help2", "help3", "help4", "help5", "help6", "help7", "help8", "help9"], 100) gen_site("medium-kb", ["help", "help1", "help2", "help3", "help4", "help5", "help6", "help7", "help8", "help9"], 100)

View file

@ -43,7 +43,7 @@ fn bench_render_rss_feed(b: &mut test::Bencher) {
let tmp_dir = tempdir().expect("create temp dir"); let tmp_dir = tempdir().expect("create temp dir");
let public = &tmp_dir.path().join("public"); let public = &tmp_dir.path().join("public");
site.set_output_path(&public); site.set_output_path(&public);
b.iter(|| site.render_rss_feed(site.library.pages_values(), None).unwrap()); b.iter(|| site.render_rss_feed(site.library.read().unwrap().pages_values(), None).unwrap());
} }
#[bench] #[bench]
@ -61,8 +61,9 @@ fn bench_render_paginated(b: &mut test::Bencher) {
let tmp_dir = tempdir().expect("create temp dir"); let tmp_dir = tempdir().expect("create temp dir");
let public = &tmp_dir.path().join("public"); let public = &tmp_dir.path().join("public");
site.set_output_path(&public); site.set_output_path(&public);
let section = site.library.sections_values()[0]; let library = site.library.read().unwrap();
let paginator = Paginator::from_section(&section, &site.library); let section = library.sections_values()[0];
let paginator = Paginator::from_section(&section, &library);
b.iter(|| site.render_paginated(public, &paginator)); b.iter(|| site.render_paginated(public, &paginator));
} }

View file

@ -19,10 +19,13 @@ extern crate utils;
#[cfg(test)] #[cfg(test)]
extern crate tempfile; extern crate tempfile;
use std::collections::HashMap;
mod sitemap;
use std::collections::{HashMap};
use std::fs::{copy, create_dir_all, remove_dir_all}; use std::fs::{copy, create_dir_all, remove_dir_all};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex, RwLock};
use glob::glob; use glob::glob;
use rayon::prelude::*; use rayon::prelude::*;
@ -30,7 +33,7 @@ use sass_rs::{compile_file, Options as SassOptions, OutputStyle};
use tera::{Context, Tera}; use tera::{Context, Tera};
use config::{get_config, Config}; use config::{get_config, Config};
use errors::{Result, ResultExt}; use errors::{Error, Result};
use front_matter::InsertAnchor; use front_matter::InsertAnchor;
use library::{ use library::{
find_taxonomies, sort_actual_pages_by_date, Library, Page, Paginator, Section, Taxonomy, find_taxonomies, sort_actual_pages_by_date, Library, Page, Paginator, Section, Taxonomy,
@ -40,20 +43,6 @@ use utils::fs::{copy_directory, create_directory, create_file, ensure_directory_
use utils::net::get_available_port; use utils::net::get_available_port;
use utils::templates::{render_template, rewrite_theme_paths}; use utils::templates::{render_template, rewrite_theme_paths};
/// The sitemap only needs links and potentially date so we trim down
/// all pages to only that
#[derive(Debug, Serialize)]
struct SitemapEntry {
permalink: String,
date: Option<String>,
}
impl SitemapEntry {
pub fn new(permalink: String, date: Option<String>) -> SitemapEntry {
SitemapEntry { permalink, date }
}
}
#[derive(Debug)] #[derive(Debug)]
pub struct Site { pub struct Site {
/// The base path of the zola site /// The base path of the zola site
@ -72,12 +61,12 @@ pub struct Site {
/// We need that if there are relative links in the content that need to be resolved /// We need that if there are relative links in the content that need to be resolved
pub permalinks: HashMap<String, String>, pub permalinks: HashMap<String, String>,
/// Contains all pages and sections of the site /// Contains all pages and sections of the site
pub library: Library, pub library: Arc<RwLock<Library>>,
} }
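The `library` field moving from `Library` to `Arc<RwLock<Library>>` is what lets the new struct-based Tera global functions (`GetPage`, `GetSection`, ...) hold a cheap clone of the library and take concurrent read locks, while `load`/rebuild paths take a single write lock. A small sketch of that access pattern, with a stand-in `Library` type:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Stand-in for zola's `Library`; only the shape of the locking matters here.
#[derive(Default)]
struct Library {
    pages: Vec<String>,
}

fn main() {
    let library = Arc::new(RwLock::new(Library::default()));

    // Writer side: what `add_page` does, under an exclusive write lock.
    library.write().expect("lock poisoned").pages.push("hello.md".to_string());

    // Reader side: e.g. a `get_page` call, possibly from several threads;
    // read locks can be held concurrently.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let lib = Arc::clone(&library);
            thread::spawn(move || lib.read().unwrap().pages.len())
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 1);
    }
    println!("ok");
}
```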
impl Site { impl Site {
/// Parse a site at the given path. Defaults to the current dir /// Parse a site at the given path. Defaults to the current dir
/// Passing in a path is only used in tests /// Passing in a path is possible using the `base-path` command line build option
pub fn new<P: AsRef<Path>>(path: P, config_file: &str) -> Result<Site> { pub fn new<P: AsRef<Path>>(path: P, config_file: &str) -> Result<Site> {
let path = path.as_ref(); let path = path.as_ref();
let mut config = get_config(path, config_file); let mut config = get_config(path, config_file);
@ -87,7 +76,8 @@ impl Site {
format!("{}/{}", path.to_string_lossy().replace("\\", "/"), "templates/**/*.*ml"); format!("{}/{}", path.to_string_lossy().replace("\\", "/"), "templates/**/*.*ml");
// Only parsing as we might be extending templates from themes and that would error // Only parsing as we might be extending templates from themes and that would error
// as we haven't loaded them yet // as we haven't loaded them yet
let mut tera = Tera::parse(&tpl_glob).chain_err(|| "Error parsing templates")?; let mut tera =
Tera::parse(&tpl_glob).map_err(|e| Error::chain("Error parsing templates", e))?;
if let Some(theme) = config.theme.clone() { if let Some(theme) = config.theme.clone() {
// Grab data from the extra section of the theme // Grab data from the extra section of the theme
config.merge_with_theme(&path.join("themes").join(&theme).join("theme.toml"))?; config.merge_with_theme(&path.join("themes").join(&theme).join("theme.toml"))?;
@ -103,10 +93,10 @@ impl Site {
path.to_string_lossy().replace("\\", "/"), path.to_string_lossy().replace("\\", "/"),
format!("themes/{}/templates/**/*.*ml", theme) format!("themes/{}/templates/**/*.*ml", theme)
); );
let mut tera_theme = let mut tera_theme = Tera::parse(&theme_tpl_glob)
Tera::parse(&theme_tpl_glob).chain_err(|| "Error parsing templates from themes")?; .map_err(|e| Error::chain("Error parsing templates from themes", e))?;
rewrite_theme_paths(&mut tera_theme, &theme); rewrite_theme_paths(&mut tera_theme, &theme);
// TODO: same as below // TODO: we do that twice, make it DRY?
if theme_path.join("templates").join("robots.txt").exists() { if theme_path.join("templates").join("robots.txt").exists() {
tera_theme tera_theme
.add_template_file(theme_path.join("templates").join("robots.txt"), None)?; .add_template_file(theme_path.join("templates").join("robots.txt"), None)?;
@ -141,15 +131,23 @@ impl Site {
taxonomies: Vec::new(), taxonomies: Vec::new(),
permalinks: HashMap::new(), permalinks: HashMap::new(),
// We will allocate it properly later on // We will allocate it properly later on
library: Library::new(0, 0), library: Arc::new(RwLock::new(Library::new(0, 0, false))),
}; };
Ok(site) Ok(site)
} }
/// The index section is ALWAYS at that path /// The index sections are ALWAYS at those paths
pub fn index_section_path(&self) -> PathBuf { /// There is one index section for the default language plus one per extra language
self.content_path.join("_index.md") fn index_section_paths(&self) -> Vec<(PathBuf, Option<String>)> {
let mut res = vec![(self.content_path.join("_index.md"), None)];
for language in &self.config.languages {
res.push((
self.content_path.join(format!("_index.{}.md", language.code)),
Some(language.code.clone()),
));
}
res
} }
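The new `index_section_paths` always yields the default `_index.md` first, then one `_index.{code}.md` per language declared in the config. A freestanding sketch of that enumeration (the `Language`/`Config` shapes are trimmed to what the function reads):

```rust
use std::path::PathBuf;

struct Language {
    code: String,
}

struct Config {
    languages: Vec<Language>,
}

// Default language first (no language code), then one entry per extra
// language declared in config.toml.
fn index_section_paths(
    content_path: &PathBuf,
    config: &Config,
) -> Vec<(PathBuf, Option<String>)> {
    let mut res = vec![(content_path.join("_index.md"), None)];
    for language in &config.languages {
        res.push((
            content_path.join(format!("_index.{}.md", language.code)),
            Some(language.code.clone()),
        ));
    }
    res
}

fn main() {
    let config = Config { languages: vec![Language { code: "fr".to_string() }] };
    let paths = index_section_paths(&PathBuf::from("content"), &config);
    assert_eq!(paths.len(), 2);
    assert_eq!(paths[0].1, None);
    assert_eq!(paths[1].0.file_name().unwrap(), "_index.fr.md");
    println!("{} index sections", paths.len());
}
```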
/// We avoid the port the server is going to use as it's not bound yet /// We avoid the port the server is going to use as it's not bound yet
@ -159,13 +157,13 @@ impl Site {
self.live_reload = get_available_port(port_to_avoid); self.live_reload = get_available_port(port_to_avoid);
} }
/// Get all the orphan (== without section) pages in the site /// Get the number of orphan (== without section) pages in the site
pub fn get_all_orphan_pages(&self) -> Vec<&Page> { pub fn get_number_orphan_pages(&self) -> usize {
self.library.get_all_orphan_pages() self.library.read().unwrap().get_all_orphan_pages().len()
} }
pub fn set_base_url(&mut self, base_url: String) { pub fn set_base_url(&mut self, base_url: String) {
let mut imageproc = self.imageproc.lock().unwrap(); let mut imageproc = self.imageproc.lock().expect("Couldn't lock imageproc (set_base_url)");
imageproc.set_base_url(&base_url); imageproc.set_base_url(&base_url);
self.config.base_url = base_url; self.config.base_url = base_url;
} }
@ -181,12 +179,18 @@ impl Site {
let content_glob = format!("{}/{}", base_path, "content/**/*.md"); let content_glob = format!("{}/{}", base_path, "content/**/*.md");
let (section_entries, page_entries): (Vec<_>, Vec<_>) = glob(&content_glob) let (section_entries, page_entries): (Vec<_>, Vec<_>) = glob(&content_glob)
.unwrap() .expect("Invalid glob")
.filter_map(|e| e.ok()) .filter_map(|e| e.ok())
.filter(|e| !e.as_path().file_name().unwrap().to_str().unwrap().starts_with('.')) .filter(|e| !e.as_path().file_name().unwrap().to_str().unwrap().starts_with('.'))
.partition(|entry| entry.as_path().file_name().unwrap() == "_index.md"); .partition(|entry| {
entry.as_path().file_name().unwrap().to_str().unwrap().starts_with("_index.")
});
self.library = Library::new(page_entries.len(), section_entries.len()); self.library = Arc::new(RwLock::new(Library::new(
page_entries.len(),
section_entries.len(),
self.config.is_multilingual(),
)));
let sections = { let sections = {
let config = &self.config; let config = &self.config;
@ -195,7 +199,7 @@ impl Site {
.into_par_iter() .into_par_iter()
.map(|entry| { .map(|entry| {
let path = entry.as_path(); let path = entry.as_path();
Section::from_file(path, config) Section::from_file(path, config, &self.base_path)
}) })
.collect::<Vec<_>>() .collect::<Vec<_>>()
}; };
@ -207,7 +211,7 @@ impl Site {
.into_par_iter() .into_par_iter()
.map(|entry| { .map(|entry| {
let path = entry.as_path(); let path = entry.as_path();
Page::from_file(path, config) Page::from_file(path, config, &self.base_path)
}) })
.collect::<Vec<_>>() .collect::<Vec<_>>()
}; };
@ -219,10 +223,34 @@ impl Site {
self.add_section(s, false)?; self.add_section(s, false)?;
} }
// Insert a default index section if necessary so we don't need to create self.create_default_index_sections()?;
// a _index.md to render the index page at the root of the site
let index_path = self.index_section_path(); let mut pages_insert_anchors = HashMap::new();
if let Some(ref index_section) = self.library.get_section(&index_path) { for page in pages {
let p = page?;
pages_insert_anchors.insert(
p.file.path.clone(),
self.find_parent_section_insert_anchor(&p.file.parent.clone(), &p.lang),
);
self.add_page(p, false)?;
}
// taxonomy Tera fns are loaded in `register_early_global_fns`
// so we do need to populate it first.
self.populate_taxonomies()?;
self.register_early_global_fns();
self.populate_sections();
self.render_markdown()?;
self.register_tera_global_fns();
Ok(())
}
/// Insert a default index section for each language if necessary so we don't need to create
/// a _index.md to render the index page at the root of the site
pub fn create_default_index_sections(&mut self) -> Result<()> {
for (index_path, lang) in self.index_section_paths() {
if let Some(ref index_section) = self.library.read().unwrap().get_section(&index_path) {
if self.config.build_search_index && !index_section.meta.in_search_index { if self.config.build_search_index && !index_section.meta.in_search_index {
bail!( bail!(
"You have enabled search in the config but disabled it in the index section: \ "You have enabled search in the config but disabled it in the index section: \
@ -231,31 +259,29 @@ impl Site {
) )
} }
} }
let mut library = self.library.write().expect("Get lock for load");
// Not in else because of borrow checker // Not in else because of borrow checker
if !self.library.contains_section(&index_path) { if !library.contains_section(&index_path) {
let mut index_section = Section::default(); let mut index_section = Section::default();
index_section.file.parent = self.content_path.clone();
index_section.file.filename =
index_path.file_name().unwrap().to_string_lossy().to_string();
if let Some(ref l) = lang {
index_section.file.name = format!("_index.{}", l);
index_section.permalink = self.config.make_permalink(l);
let filename = format!("_index.{}.md", l);
index_section.file.path = self.content_path.join(&filename);
index_section.file.relative = filename;
index_section.lang = index_section.file.find_language(&self.config)?;
} else {
index_section.file.name = "_index".to_string();
index_section.permalink = self.config.make_permalink(""); index_section.permalink = self.config.make_permalink("");
index_section.file.path = self.content_path.join("_index.md"); index_section.file.path = self.content_path.join("_index.md");
index_section.file.parent = self.content_path.clone();
index_section.file.relative = "_index.md".to_string(); index_section.file.relative = "_index.md".to_string();
self.library.insert_section(index_section);
} }
library.insert_section(index_section);
let mut pages_insert_anchors = HashMap::new(); }
for page in pages {
let p = page?;
pages_insert_anchors.insert(
p.file.path.clone(),
self.find_parent_section_insert_anchor(&p.file.parent.clone()),
);
self.add_page(p, false)?;
} }
self.register_early_global_fns();
self.populate_sections();
self.render_markdown()?;
self.populate_taxonomies()?;
self.register_tera_global_fns();
Ok(()) Ok(())
} }
@ -271,14 +297,15 @@ impl Site {
// This is needed in the first place because of silly borrow checker // This is needed in the first place because of silly borrow checker
let mut pages_insert_anchors = HashMap::new(); let mut pages_insert_anchors = HashMap::new();
for (_, p) in self.library.pages() { for (_, p) in self.library.read().unwrap().pages() {
pages_insert_anchors.insert( pages_insert_anchors.insert(
p.file.path.clone(), p.file.path.clone(),
self.find_parent_section_insert_anchor(&p.file.parent.clone()), self.find_parent_section_insert_anchor(&p.file.parent.clone(), &p.lang),
); );
} }
self.library let mut library = self.library.write().expect("Get lock for render_markdown");
library
.pages_mut() .pages_mut()
.values_mut() .values_mut()
.collect::<Vec<_>>() .collect::<Vec<_>>()
@ -289,7 +316,7 @@ impl Site {
}) })
.collect::<Result<()>>()?; .collect::<Result<()>>()?;
self.library library
.sections_mut() .sections_mut()
.values_mut() .values_mut()
.collect::<Vec<_>>() .collect::<Vec<_>>()
@ -300,33 +327,37 @@ impl Site {
Ok(()) Ok(())
} }
/// Adds global fns that are to be available to shortcodes while rendering markdown /// Adds global fns that are to be available to shortcodes while rendering
/// markdown
pub fn register_early_global_fns(&mut self) { pub fn register_early_global_fns(&mut self) {
self.tera.register_function( self.tera.register_function(
"get_url", "get_url",
global_fns::make_get_url(self.permalinks.clone(), self.config.clone()), global_fns::GetUrl::new(self.config.clone(), self.permalinks.clone()),
); );
self.tera.register_function( self.tera.register_function(
"resize_image", "resize_image",
global_fns::make_resize_image(self.imageproc.clone()), global_fns::ResizeImage::new(self.imageproc.clone()),
);
self.tera.register_function("load_data", global_fns::LoadData::new(self.base_path.clone()));
self.tera.register_function("trans", global_fns::Trans::new(self.config.clone()));
self.tera.register_function(
"get_taxonomy_url",
global_fns::GetTaxonomyUrl::new(&self.taxonomies),
); );
} }
pub fn register_tera_global_fns(&mut self) { pub fn register_tera_global_fns(&mut self) {
self.tera.register_function("trans", global_fns::make_trans(self.config.clone())); self.tera.register_function(
self.tera.register_function("get_page", global_fns::make_get_page(&self.library)); "get_page",
self.tera.register_function("get_section", global_fns::make_get_section(&self.library)); global_fns::GetPage::new(self.base_path.clone(), self.library.clone()),
);
self.tera.register_function(
"get_section",
global_fns::GetSection::new(self.base_path.clone(), self.library.clone()),
);
self.tera.register_function( self.tera.register_function(
"get_taxonomy", "get_taxonomy",
global_fns::make_get_taxonomy(&self.taxonomies, &self.library), global_fns::GetTaxonomy::new(self.taxonomies.clone(), self.library.clone()),
);
self.tera.register_function(
"get_taxonomy_url",
global_fns::make_get_taxonomy_url(&self.taxonomies),
);
self.tera.register_function(
"load_data",
global_fns::make_load_data(self.content_path.clone(), self.base_path.clone()),
); );
} }
@@ -337,11 +368,13 @@ impl Site {
     pub fn add_page(&mut self, mut page: Page, render: bool) -> Result<Option<Page>> {
         self.permalinks.insert(page.file.relative.clone(), page.permalink.clone());
         if render {
-            let insert_anchor = self.find_parent_section_insert_anchor(&page.file.parent);
+            let insert_anchor =
+                self.find_parent_section_insert_anchor(&page.file.parent, &page.lang);
             page.render_markdown(&self.permalinks, &self.tera, &self.config, insert_anchor)?;
         }
-        let prev = self.library.remove_page(&page.file.path);
-        self.library.insert_page(page);
+        let mut library = self.library.write().expect("Get lock for add_page");
+        let prev = library.remove_page(&page.file.path);
+        library.insert_page(page);

         Ok(prev)
     }
@@ -355,16 +388,26 @@ impl Site {
         if render {
             section.render_markdown(&self.permalinks, &self.tera, &self.config)?;
         }
-        let prev = self.library.remove_section(&section.file.path);
-        self.library.insert_section(section);
+        let mut library = self.library.write().expect("Get lock for add_section");
+        let prev = library.remove_section(&section.file.path);
+        library.insert_section(section);

         Ok(prev)
     }
     /// Finds the insert_anchor for the parent section of the directory at `path`.
     /// Defaults to `AnchorInsert::None` if no parent section found
-    pub fn find_parent_section_insert_anchor(&self, parent_path: &PathBuf) -> InsertAnchor {
-        match self.library.get_section(&parent_path.join("_index.md")) {
+    pub fn find_parent_section_insert_anchor(
+        &self,
+        parent_path: &PathBuf,
+        lang: &str,
+    ) -> InsertAnchor {
+        let parent = if lang != self.config.default_language {
+            parent_path.join(format!("_index.{}.md", lang))
+        } else {
+            parent_path.join("_index.md")
+        };
+        match self.library.read().unwrap().get_section(&parent) {
             Some(s) => s.meta.insert_anchor_links,
             None => InsertAnchor::None,
         }
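The language-aware `_index` lookup added above is easy to check in isolation. The sketch below mirrors the branch on `default_language`; `index_file_for` is an illustrative standalone name, not the real API:

```rust
use std::path::PathBuf;

// Sketch of the language-aware `_index` resolution: non-default languages
// look up `_index.{lang}.md`, while the default language keeps `_index.md`.
fn index_file_for(parent: &PathBuf, lang: &str, default_language: &str) -> PathBuf {
    if lang != default_language {
        parent.join(format!("_index.{}.md", lang))
    } else {
        parent.join("_index.md")
    }
}

fn main() {
    let parent = PathBuf::from("content/posts");
    assert_eq!(index_file_for(&parent, "fr", "en"), PathBuf::from("content/posts/_index.fr.md"));
    assert_eq!(index_file_for(&parent, "en", "en"), PathBuf::from("content/posts/_index.md"));
}
```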
@@ -373,7 +416,8 @@ impl Site {
     /// Find out the direct subsections of each subsection if there are some
     /// as well as the pages for each section
     pub fn populate_sections(&mut self) {
-        self.library.populate_sections();
+        let mut library = self.library.write().expect("Get lock for populate_sections");
+        library.populate_sections(&self.config);
     }
     /// Find all the tags and categories if it's asked in the config
@@ -382,7 +426,7 @@ impl Site {
             return Ok(());
         }

-        self.taxonomies = find_taxonomies(&self.config, &self.library)?;
+        self.taxonomies = find_taxonomies(&self.config, &self.library.read().unwrap())?;

         Ok(())
     }
@@ -420,12 +464,13 @@ impl Site {
     }

     pub fn num_img_ops(&self) -> usize {
-        let imageproc = self.imageproc.lock().unwrap();
+        let imageproc = self.imageproc.lock().expect("Couldn't lock imageproc (num_img_ops)");
         imageproc.num_img_ops()
     }

     pub fn process_images(&self) -> Result<()> {
-        let mut imageproc = self.imageproc.lock().unwrap();
+        let mut imageproc =
+            self.imageproc.lock().expect("Couldn't lock imageproc (process_images)");
         imageproc.prune()?;
         imageproc.do_process()
     }
@@ -434,7 +479,8 @@ impl Site {
     pub fn clean(&self) -> Result<()> {
         if self.output_path.exists() {
             // Delete current `public` directory so we can start fresh
-            remove_dir_all(&self.output_path).chain_err(|| "Couldn't delete output directory")?;
+            remove_dir_all(&self.output_path)
+                .map_err(|e| Error::chain("Couldn't delete output directory", e))?;
         }

         Ok(())
@@ -459,13 +505,17 @@ impl Site {
         create_directory(&current_path)?;

         // Finally, create a index.html file there with the page rendered
-        let output = page.render_html(&self.tera, &self.config, &self.library)?;
+        let output = page.render_html(&self.tera, &self.config, &self.library.read().unwrap())?;
         create_file(&current_path.join("index.html"), &self.inject_livereload(output))?;

         // Copy any asset we found previously into the same directory as the index.html
         for asset in &page.assets {
             let asset_path = asset.as_path();
-            copy(&asset_path, &current_path.join(asset_path.file_name().unwrap()))?;
+            copy(
+                &asset_path,
+                &current_path
+                    .join(asset_path.file_name().expect("Couldn't get filename from page asset")),
+            )?;
         }

         Ok(())
@@ -474,18 +524,8 @@ impl Site {
     /// Deletes the `public` directory and builds the site
     pub fn build(&self) -> Result<()> {
         self.clean()?;
-        // Render aliases first to allow overwriting
-        self.render_aliases()?;
-        self.render_sections()?;
-        self.render_orphan_pages()?;
-        self.render_sitemap()?;
-        if self.config.generate_rss {
-            self.render_rss_feed(self.library.pages_values(), None)?;
-        }
-        self.render_404()?;
-        self.render_robots()?;
-        self.render_taxonomies()?;

+        // Generate/move all assets before rendering any content
         if let Some(ref theme) = self.config.theme {
             let theme_path = self.base_path.join("themes").join(theme);
             if theme_path.join("sass").exists() {
@@ -504,6 +544,40 @@ impl Site {
             self.build_search_index()?;
         }

+        // Render aliases first to allow overwriting
+        self.render_aliases()?;
+        self.render_sections()?;
+        self.render_orphan_pages()?;
+        self.render_sitemap()?;
+
+        let library = self.library.read().unwrap();
+        if self.config.generate_rss {
+            let pages = if self.config.is_multilingual() {
+                library
+                    .pages_values()
+                    .iter()
+                    .filter(|p| p.lang == self.config.default_language)
+                    .map(|p| *p)
+                    .collect()
+            } else {
+                library.pages_values()
+            };
+            self.render_rss_feed(pages, None)?;
+        }
+
+        for lang in &self.config.languages {
+            if !lang.rss {
+                continue;
+            }
+            let pages =
+                library.pages_values().iter().filter(|p| p.lang == lang.code).map(|p| *p).collect();
+            self.render_rss_feed(pages, Some(&PathBuf::from(lang.code.clone())))?;
+        }
+
+        self.render_404()?;
+        self.render_robots()?;
+        self.render_taxonomies()?;
+
         Ok(())
     }
@@ -513,7 +587,7 @@ impl Site {
             &self.output_path.join(&format!("search_index.{}.js", self.config.default_language)),
             &format!(
                 "window.searchIndex = {};",
-                search::build_index(&self.config.default_language, &self.library)?
+                search::build_index(&self.config.default_language, &self.library.read().unwrap())?
             ),
         )?;
@@ -562,7 +636,7 @@ impl Site {
     ) -> Result<Vec<(PathBuf, PathBuf)>> {
         let glob_string = format!("{}/**/*.{}", sass_path.display(), extension);
         let files = glob(&glob_string)
-            .unwrap()
+            .expect("Invalid glob for sass")
             .filter_map(|e| e.ok())
             .filter(|entry| {
                 !entry.as_path().file_name().unwrap().to_string_lossy().starts_with('_')
@@ -590,7 +664,7 @@ impl Site {
     pub fn render_aliases(&self) -> Result<()> {
         ensure_directory_exists(&self.output_path)?;

-        for (_, page) in self.library.pages() {
+        for (_, page) in self.library.read().unwrap().pages() {
             for alias in &page.meta.aliases {
                 let mut output_path = self.output_path.to_path_buf();
                 let mut split = alias.split('/').collect::<Vec<_>>();
@@ -627,7 +701,7 @@ impl Site {
         ensure_directory_exists(&self.output_path)?;
         let mut context = Context::new();
         context.insert("config", &self.config);
-        let output = render_template("404.html", &self.tera, &context, &self.config.theme)?;
+        let output = render_template("404.html", &self.tera, context, &self.config.theme)?;
         create_file(&self.output_path.join("404.html"), &self.inject_livereload(output))
     }
@@ -638,7 +712,7 @@ impl Site {
         context.insert("config", &self.config);
         create_file(
             &self.output_path.join("robots.txt"),
-            &render_template("robots.txt", &self.tera, &context, &self.config.theme)?,
+            &render_template("robots.txt", &self.tera, context, &self.config.theme)?,
         )
     }
@@ -657,11 +731,18 @@ impl Site {
         }

         ensure_directory_exists(&self.output_path)?;
-        let output_path = self.output_path.join(&taxonomy.kind.name);
-        let list_output = taxonomy.render_all_terms(&self.tera, &self.config, &self.library)?;
+        let output_path = if taxonomy.kind.lang != self.config.default_language {
+            let mid_path = self.output_path.join(&taxonomy.kind.lang);
+            create_directory(&mid_path)?;
+            mid_path.join(&taxonomy.kind.name)
+        } else {
+            self.output_path.join(&taxonomy.kind.name)
+        };
+        let list_output =
+            taxonomy.render_all_terms(&self.tera, &self.config, &self.library.read().unwrap())?;
         create_directory(&output_path)?;
         create_file(&output_path.join("index.html"), &self.inject_livereload(list_output))?;
+        let library = self.library.read().unwrap();
         taxonomy
             .items
             .par_iter()
@@ -670,18 +751,18 @@ impl Site {
                 if taxonomy.kind.is_paginated() {
                     self.render_paginated(
                         &path,
-                        &Paginator::from_taxonomy(&taxonomy, item, &self.library),
+                        &Paginator::from_taxonomy(&taxonomy, item, &library),
                     )?;
                 } else {
                     let single_output =
-                        taxonomy.render_term(item, &self.tera, &self.config, &self.library)?;
+                        taxonomy.render_term(item, &self.tera, &self.config, &library)?;
                     create_directory(&path)?;
                     create_file(&path.join("index.html"), &self.inject_livereload(single_output))?;
                 }

                 if taxonomy.kind.rss {
                     self.render_rss_feed(
-                        item.pages.iter().map(|p| self.library.get_page_by_key(*p)).collect(),
+                        item.pages.iter().map(|p| library.get_page_by_key(*p)).collect(),
                         Some(&PathBuf::from(format!("{}/{}", taxonomy.kind.name, item.slug))),
                     )
                 } else {
@@ -695,82 +776,46 @@ impl Site {
     pub fn render_sitemap(&self) -> Result<()> {
         ensure_directory_exists(&self.output_path)?;

-        let mut context = Context::new();
-
-        let mut pages = self
-            .library
-            .pages_values()
-            .iter()
-            .filter(|p| !p.is_draft())
-            .map(|p| {
-                let date = match p.meta.date {
-                    Some(ref d) => Some(d.to_string()),
-                    None => None,
-                };
-                SitemapEntry::new(p.permalink.clone(), date)
-            })
-            .collect::<Vec<_>>();
-        pages.sort_by(|a, b| a.permalink.cmp(&b.permalink));
-        context.insert("pages", &pages);
-
-        let mut sections = self
-            .library
-            .sections_values()
-            .iter()
-            .map(|s| SitemapEntry::new(s.permalink.clone(), None))
-            .collect::<Vec<_>>();
-        for section in
-            self.library.sections_values().iter().filter(|s| s.meta.paginate_by.is_some())
-        {
-            let number_pagers = (section.pages.len() as f64
-                / section.meta.paginate_by.unwrap() as f64)
-                .ceil() as isize;
-            for i in 1..number_pagers + 1 {
-                let permalink =
-                    format!("{}{}/{}/", section.permalink, section.meta.paginate_path, i);
-                sections.push(SitemapEntry::new(permalink, None))
-            }
-        }
-        sections.sort_by(|a, b| a.permalink.cmp(&b.permalink));
-        context.insert("sections", &sections);
-
-        let mut taxonomies = vec![];
-        for taxonomy in &self.taxonomies {
-            let name = &taxonomy.kind.name;
-            let mut terms = vec![];
-            terms.push(SitemapEntry::new(self.config.make_permalink(name), None));
-            for item in &taxonomy.items {
-                terms.push(SitemapEntry::new(
-                    self.config.make_permalink(&format!("{}/{}", &name, item.slug)),
-                    None,
-                ));
-
-                if taxonomy.kind.is_paginated() {
-                    let number_pagers = (item.pages.len() as f64
-                        / taxonomy.kind.paginate_by.unwrap() as f64)
-                        .ceil() as isize;
-                    for i in 1..number_pagers + 1 {
-                        let permalink = self.config.make_permalink(&format!(
-                            "{}/{}/{}/{}",
-                            name,
-                            item.slug,
-                            taxonomy.kind.paginate_path(),
-                            i
-                        ));
-                        terms.push(SitemapEntry::new(permalink, None))
-                    }
-                }
-            }
-            terms.sort_by(|a, b| a.permalink.cmp(&b.permalink));
-            taxonomies.push(terms);
-        }
-        context.insert("taxonomies", &taxonomies);
-        context.insert("config", &self.config);
-        let sitemap = &render_template("sitemap.xml", &self.tera, &context, &self.config.theme)?;
-
+        let library = self.library.read().unwrap();
+        let all_sitemap_entries = sitemap::find_entries(
+            &library,
+            &self.taxonomies[..],
+            &self.config,
+        );
+        let sitemap_limit = 30000;
+
+        if all_sitemap_entries.len() < sitemap_limit {
+            // Create single sitemap
+            let mut context = Context::new();
+            context.insert("entries", &all_sitemap_entries);
+            let sitemap = &render_template("sitemap.xml", &self.tera, context, &self.config.theme)?;
+            create_file(&self.output_path.join("sitemap.xml"), sitemap)?;
+            return Ok(());
+        }
+
+        // Create multiple sitemaps (max 30000 urls each)
+        let mut sitemap_index = Vec::new();
+        for (i, chunk) in
+            all_sitemap_entries.iter().collect::<Vec<_>>().chunks(sitemap_limit).enumerate()
+        {
+            let mut context = Context::new();
+            context.insert("entries", &chunk);
+            let sitemap = &render_template("sitemap.xml", &self.tera, context, &self.config.theme)?;
+            let file_name = format!("sitemap{}.xml", i + 1);
+            create_file(&self.output_path.join(&file_name), sitemap)?;
+            let mut sitemap_url: String = self.config.make_permalink(&file_name);
+            sitemap_url.pop(); // Remove trailing slash
+            sitemap_index.push(sitemap_url);
+        }
+        // Create main sitemap that reference numbered sitemaps
+        let mut main_context = Context::new();
+        main_context.insert("sitemaps", &sitemap_index);
+        let sitemap = &render_template(
+            "split_sitemap_index.xml",
+            &self.tera,
+            main_context,
+            &self.config.theme,
+        )?;
         create_file(&self.output_path.join("sitemap.xml"), sitemap)?;

         Ok(())
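The splitting above caps each sitemap at 30 000 links and numbers the extra files. A minimal standalone sketch of just the chunk-and-number step (the single-sitemap shortcut for fewer links is omitted; `sitemap_file_names` is an illustrative helper, not part of the codebase):

```rust
// Entries are chunked at the limit and each chunk gets a numbered file name,
// mirroring the `format!("sitemap{}.xml", i + 1)` naming in the diff above.
fn sitemap_file_names(num_entries: usize, limit: usize) -> Vec<String> {
    let entries: Vec<usize> = (0..num_entries).collect();
    entries.chunks(limit).enumerate().map(|(i, _)| format!("sitemap{}.xml", i + 1)).collect()
}

fn main() {
    // 65 000 links at a 30 000 limit -> three numbered sitemaps.
    assert_eq!(
        sitemap_file_names(65_000, 30_000),
        vec!["sitemap1.xml", "sitemap2.xml", "sitemap3.xml"]
    );
}
```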
@@ -800,12 +845,13 @@ impl Site {
         pages.par_sort_unstable_by(sort_actual_pages_by_date);
         context.insert("last_build_date", &pages[0].meta.date.clone());

+        let library = self.library.read().unwrap();
         // limit to the last n elements if the limit is set; otherwise use all.
-        let num_entries = self.config.rss_limit.unwrap_or(pages.len());
+        let num_entries = self.config.rss_limit.unwrap_or_else(|| pages.len());
         let p = pages
             .iter()
             .take(num_entries)
-            .map(|x| x.to_serialized_basic(&self.library))
+            .map(|x| x.to_serialized_basic(&library))
             .collect::<Vec<_>>();

         context.insert("pages", &p);
@@ -819,7 +865,7 @@ impl Site {
         context.insert("feed_url", &rss_feed_url);

-        let feed = &render_template("rss.xml", &self.tera, &context, &self.config.theme)?;
+        let feed = &render_template("rss.xml", &self.tera, context, &self.config.theme)?;

         if let Some(ref base) = base_path {
             let mut output_path = self.output_path.clone();
@@ -840,6 +886,14 @@ impl Site {
     pub fn render_section(&self, section: &Section, render_pages: bool) -> Result<()> {
         ensure_directory_exists(&self.output_path)?;
         let mut output_path = self.output_path.clone();
+        if section.lang != self.config.default_language {
+            output_path.push(&section.lang);
+            if !output_path.exists() {
+                create_directory(&output_path)?;
+            }
+        }
+
         for component in &section.file.components {
             output_path.push(component);

@@ -851,14 +905,19 @@ impl Site {
         // Copy any asset we found previously into the same directory as the index.html
         for asset in &section.assets {
             let asset_path = asset.as_path();
-            copy(&asset_path, &output_path.join(asset_path.file_name().unwrap()))?;
+            copy(
+                &asset_path,
+                &output_path.join(
+                    asset_path.file_name().expect("Failed to get asset filename for section"),
+                ),
+            )?;
         }

         if render_pages {
             section
                 .pages
                 .par_iter()
-                .map(|k| self.render_page(self.library.get_page_by_key(*k)))
+                .map(|k| self.render_page(self.library.read().unwrap().get_page_by_key(*k)))
                 .collect::<Result<()>>()?;
         }
@@ -876,9 +935,13 @@ impl Site {
         }

         if section.meta.is_paginated() {
-            self.render_paginated(&output_path, &Paginator::from_section(&section, &self.library))?;
+            self.render_paginated(
+                &output_path,
+                &Paginator::from_section(&section, &self.library.read().unwrap()),
+            )?;
         } else {
-            let output = section.render_html(&self.tera, &self.config, &self.library)?;
+            let output =
+                section.render_html(&self.tera, &self.config, &self.library.read().unwrap())?;
             create_file(&output_path.join("index.html"), &self.inject_livereload(output))?;
         }
@@ -888,7 +951,12 @@ impl Site {
     /// Used only on reload
     pub fn render_index(&self) -> Result<()> {
         self.render_section(
-            &self.library.get_section(&self.content_path.join("_index.md")).unwrap(),
+            &self
+                .library
+                .read()
+                .unwrap()
+                .get_section(&self.content_path.join("_index.md"))
+                .expect("Failed to get index section"),
             false,
         )
     }
@@ -896,6 +964,8 @@ impl Site {
     /// Renders all sections
     pub fn render_sections(&self) -> Result<()> {
         self.library
+            .read()
+            .unwrap()
             .sections_values()
             .into_par_iter()
             .map(|s| self.render_section(s, true))
/// Renders all pages that do not belong to any sections /// Renders all pages that do not belong to any sections
pub fn render_orphan_pages(&self) -> Result<()> { pub fn render_orphan_pages(&self) -> Result<()> {
ensure_directory_exists(&self.output_path)?; ensure_directory_exists(&self.output_path)?;
let library = self.library.read().unwrap();
for page in self.get_all_orphan_pages() { for page in library.get_all_orphan_pages() {
self.render_page(page)?; self.render_page(page)?;
} }
@@ -926,8 +996,12 @@ impl Site {
             .map(|pager| {
                 let page_path = folder_path.join(&format!("{}", pager.index));
                 create_directory(&page_path)?;
-                let output =
-                    paginator.render_pager(pager, &self.config, &self.tera, &self.library)?;
+                let output = paginator.render_pager(
+                    pager,
+                    &self.config,
+                    &self.tera,
+                    &self.library.read().unwrap(),
+                )?;
                 if pager.index > 1 {
                     create_file(&page_path.join("index.html"), &self.inject_livereload(output))?;
                 } else {
@@ -0,0 +1,127 @@
use std::borrow::Cow;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

use tera::{Map, Value};

use config::Config;
use library::{Library, Taxonomy};
/// The sitemap only needs links and, potentially, a date and extra data for pages
/// (in case of updates, for example), so we trim all entries down to only that
#[derive(Debug, Serialize)]
pub struct SitemapEntry<'a> {
permalink: Cow<'a, str>,
date: Option<String>,
extra: Option<&'a Map<String, Value>>,
}
// Hash/Eq is not implemented for tera::Map but in our case we only care about the permalink
// when comparing/hashing so we implement it manually
impl<'a> Hash for SitemapEntry<'a> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.permalink.hash(state);
}
}
impl<'a> PartialEq for SitemapEntry<'a> {
fn eq(&self, other: &SitemapEntry) -> bool {
self.permalink == other.permalink
}
}
impl<'a> Eq for SitemapEntry<'a> {}
impl<'a> SitemapEntry<'a> {
pub fn new(permalink: Cow<'a, str>, date: Option<String>) -> Self {
SitemapEntry { permalink, date, extra: None }
}
pub fn add_extra(&mut self, extra: &'a Map<String, Value>) {
self.extra = Some(extra);
}
}
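Because `Hash` and `PartialEq` consider only the permalink, duplicate links collapse once entries are inserted into a `HashSet`. A minimal stand-in type (not the real `SitemapEntry`) demonstrates the effect:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Stand-in entry: equality and hashing look only at the permalink, so two
// entries for the same URL collapse to one in a HashSet even if their
// other fields differ.
#[derive(Debug)]
struct Entry {
    permalink: String,
    date: Option<String>,
}

impl Hash for Entry {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.permalink.hash(state);
    }
}

impl PartialEq for Entry {
    fn eq(&self, other: &Entry) -> bool {
        self.permalink == other.permalink
    }
}

impl Eq for Entry {}

fn main() {
    let mut set = HashSet::new();
    set.insert(Entry { permalink: "https://example.com/a/".into(), date: None });
    set.insert(Entry { permalink: "https://example.com/a/".into(), date: Some("2019-03-25".into()) });
    assert_eq!(set.len(), 1);
}
```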
/// Finds out all the links to put in a sitemap from the pages/sections/taxonomies
/// There are no duplicate permalinks in the output vec
pub fn find_entries<'a>(
    library: &'a Library,
    taxonomies: &'a [Taxonomy],
    config: &'a Config,
) -> Vec<SitemapEntry<'a>> {
let pages = library
.pages_values()
.iter()
.filter(|p| !p.is_draft())
.map(|p| {
let date = match p.meta.date {
Some(ref d) => Some(d.to_string()),
None => None,
};
let mut entry = SitemapEntry::new(Cow::Borrowed(&p.permalink), date);
entry.add_extra(&p.meta.extra);
entry
})
.collect::<Vec<_>>();
let mut sections = library
.sections_values()
.iter()
.filter(|s| s.meta.render)
.map(|s| SitemapEntry::new(Cow::Borrowed(&s.permalink), None))
.collect::<Vec<_>>();
for section in library
.sections_values()
.iter()
.filter(|s| s.meta.paginate_by.is_some())
{
let number_pagers = (section.pages.len() as f64
/ section.meta.paginate_by.unwrap() as f64)
.ceil() as isize;
for i in 1..=number_pagers {
let permalink =
format!("{}{}/{}/", section.permalink, section.meta.paginate_path, i);
sections.push(SitemapEntry::new(Cow::Owned(permalink), None))
}
}
let mut taxonomies_entries = vec![];
for taxonomy in taxonomies {
let name = &taxonomy.kind.name;
let mut terms = vec![];
terms.push(SitemapEntry::new(Cow::Owned(config.make_permalink(name)), None));
for item in &taxonomy.items {
terms.push(SitemapEntry::new(
Cow::Owned(config.make_permalink(&format!("{}/{}", name, item.slug))),
None,
));
if taxonomy.kind.is_paginated() {
let number_pagers = (item.pages.len() as f64
/ taxonomy.kind.paginate_by.unwrap() as f64)
.ceil() as isize;
for i in 1..=number_pagers {
let permalink = config.make_permalink(&format!(
"{}/{}/{}/{}",
name,
item.slug,
taxonomy.kind.paginate_path(),
i
));
terms.push(SitemapEntry::new(Cow::Owned(permalink), None))
}
}
}
taxonomies_entries.push(terms);
}
let mut all_sitemap_entries = HashSet::new();
for p in pages {
all_sitemap_entries.insert(p);
}
for s in sections {
all_sitemap_entries.insert(s);
}
for terms in taxonomies_entries {
for term in terms {
all_sitemap_entries.insert(term);
}
}
all_sitemap_entries.into_iter().collect::<Vec<_>>()
}
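Both the section and taxonomy branches in `find_entries` derive the pager count the same way before emitting permalinks `1..=number_pagers`. A small sketch of that arithmetic (the helper name is illustrative):

```rust
// Pager count used for paginated sections/taxonomies:
// ceil(number_of_pages / paginate_by).
fn number_pagers(num_pages: usize, paginate_by: usize) -> usize {
    (num_pages as f64 / paginate_by as f64).ceil() as usize
}

fn main() {
    assert_eq!(number_pagers(10, 5), 2);
    assert_eq!(number_pagers(11, 5), 3); // a partial last page still gets a pager
    assert_eq!(number_pagers(0, 5), 0);
}
```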
@@ -0,0 +1,69 @@
extern crate site;
extern crate tempfile;
use std::env;
use std::path::PathBuf;
use self::site::Site;
use self::tempfile::{tempdir, TempDir};
// 2 helper macros to make all the build testing more bearable
#[macro_export]
macro_rules! file_exists {
($root: expr, $path: expr) => {{
let mut path = $root.clone();
for component in $path.split("/") {
path = path.join(component);
}
std::path::Path::new(&path).exists()
}};
}
#[macro_export]
macro_rules! file_contains {
($root: expr, $path: expr, $text: expr) => {{
use std::io::prelude::*;
let mut path = $root.clone();
for component in $path.split("/") {
path = path.join(component);
}
let mut file = std::fs::File::open(&path).expect(&format!("Failed to open {:?}", $path));
let mut s = String::new();
file.read_to_string(&mut s).unwrap();
println!("{}", s);
s.contains($text)
}};
}
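Both macros build the path component by component, so test paths written with `/` also resolve on Windows. The same idea as a plain function (the name is illustrative):

```rust
use std::path::PathBuf;

// Split the test path on `/` and join component by component so the
// resulting PathBuf uses the platform's separator.
fn portable_join(root: &PathBuf, path: &str) -> PathBuf {
    let mut out = root.clone();
    for component in path.split('/') {
        out = out.join(component);
    }
    out
}

fn main() {
    let root = PathBuf::from("public");
    let joined = portable_join(&root, "posts/tutorials/index.html");
    assert!(joined.ends_with("index.html"));
}
```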
/// We return the tmp dir, otherwise it would go out of scope and be deleted.
/// The tests can ignore it if they don't need it by prefixing it with a `_`.
pub fn build_site(name: &str) -> (Site, TempDir, PathBuf) {
let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
path.push(name);
let mut site = Site::new(&path, "config.toml").unwrap();
site.load().unwrap();
let tmp_dir = tempdir().expect("create temp dir");
let public = &tmp_dir.path().join("public");
site.set_output_path(&public);
site.build().expect("Couldn't build the site");
(site, tmp_dir, public.clone())
}
/// Same as `build_site` but has a hook to setup some config options
pub fn build_site_with_setup<F>(name: &str, mut setup_cb: F) -> (Site, TempDir, PathBuf)
where
F: FnMut(Site) -> (Site, bool),
{
let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
path.push(name);
let site = Site::new(&path, "config.toml").unwrap();
let (mut site, needs_loading) = setup_cb(site);
if needs_loading {
site.load().unwrap();
}
let tmp_dir = tempdir().expect("create temp dir");
let public = &tmp_dir.path().join("public");
site.set_output_path(&public);
site.build().expect("Couldn't build the site");
(site, tmp_dir, public.clone())
}
@@ -1,16 +1,14 @@
 extern crate config;
 extern crate site;
-extern crate tempfile;
+
+mod common;

 use std::collections::HashMap;
 use std::env;
-use std::fs::File;
-use std::io::prelude::*;
 use std::path::Path;

+use common::{build_site, build_site_with_setup};
 use config::Taxonomy;
 use site::Site;
-use tempfile::tempdir;

 #[test]
 fn can_parse_site() {
@@ -18,59 +16,59 @@ fn can_parse_site() {
     path.push("test_site");
     let mut site = Site::new(&path, "config.toml").unwrap();
     site.load().unwrap();
+    let library = site.library.read().unwrap();

     // Correct number of pages (sections do not count as pages)
-    assert_eq!(site.library.pages().len(), 22);
+    assert_eq!(library.pages().len(), 22);
     let posts_path = path.join("content").join("posts");

     // Make sure the page with a url doesn't have any sections
-    let url_post = site.library.get_page(&posts_path.join("fixed-url.md")).unwrap();
+    let url_post = library.get_page(&posts_path.join("fixed-url.md")).unwrap();
     assert_eq!(url_post.path, "a-fixed-url/");

     // Make sure the article in a folder with only asset doesn't get counted as a section
     let asset_folder_post =
-        site.library.get_page(&posts_path.join("with-assets").join("index.md")).unwrap();
+        library.get_page(&posts_path.join("with-assets").join("index.md")).unwrap();
     assert_eq!(asset_folder_post.file.components, vec!["posts".to_string()]);

     // That we have the right number of sections
-    assert_eq!(site.library.sections().len(), 11);
+    assert_eq!(library.sections().len(), 11);

     // And that the sections are correct
-    let index_section = site.library.get_section(&path.join("content").join("_index.md")).unwrap();
+    let index_section = library.get_section(&path.join("content").join("_index.md")).unwrap();
     assert_eq!(index_section.subsections.len(), 4);
     assert_eq!(index_section.pages.len(), 1);
     assert!(index_section.ancestors.is_empty());

-    let posts_section = site.library.get_section(&posts_path.join("_index.md")).unwrap();
+    let posts_section = library.get_section(&posts_path.join("_index.md")).unwrap();
     assert_eq!(posts_section.subsections.len(), 2);
     assert_eq!(posts_section.pages.len(), 10);
     assert_eq!(
         posts_section.ancestors,
-        vec![*site.library.get_section_key(&index_section.file.path).unwrap()]
+        vec![*library.get_section_key(&index_section.file.path).unwrap()]
     );

     // Make sure we remove all the pwd + content from the sections
-    let basic = site.library.get_page(&posts_path.join("simple.md")).unwrap();
+    let basic = library.get_page(&posts_path.join("simple.md")).unwrap();
     assert_eq!(basic.file.components, vec!["posts".to_string()]);
     assert_eq!(
         basic.ancestors,
         vec![
-            *site.library.get_section_key(&index_section.file.path).unwrap(),
-            *site.library.get_section_key(&posts_section.file.path).unwrap(),
+            *library.get_section_key(&index_section.file.path).unwrap(),
+            *library.get_section_key(&posts_section.file.path).unwrap(),
         ]
     );

     let tutorials_section =
-        site.library.get_section(&posts_path.join("tutorials").join("_index.md")).unwrap();
+        library.get_section(&posts_path.join("tutorials").join("_index.md")).unwrap();
     assert_eq!(tutorials_section.subsections.len(), 2);
-    let sub1 = site.library.get_section_by_key(tutorials_section.subsections[0]);
-    let sub2 = site.library.get_section_by_key(tutorials_section.subsections[1]);
+    let sub1 = library.get_section_by_key(tutorials_section.subsections[0]);
+    let sub2 = library.get_section_by_key(tutorials_section.subsections[1]);
     assert_eq!(sub1.clone().meta.title.unwrap(), "Programming");
     assert_eq!(sub2.clone().meta.title.unwrap(), "DevOps");
     assert_eq!(tutorials_section.pages.len(), 0);

-    let devops_section = site
-        .library
+    let devops_section = library
         .get_section(&posts_path.join("tutorials").join("devops").join("_index.md"))
         .unwrap();
     assert_eq!(devops_section.subsections.len(), 0);
@@ -78,55 +76,22 @@ fn can_parse_site() {
     assert_eq!(
         devops_section.ancestors,
         vec![
-            *site.library.get_section_key(&index_section.file.path).unwrap(),
-            *site.library.get_section_key(&posts_section.file.path).unwrap(),
-            *site.library.get_section_key(&tutorials_section.file.path).unwrap(),
+            *library.get_section_key(&index_section.file.path).unwrap(),
+            *library.get_section_key(&posts_section.file.path).unwrap(),
+            *library.get_section_key(&tutorials_section.file.path).unwrap(),
         ]
     );

-    let prog_section = site
-        .library
+    let prog_section = library
         .get_section(&posts_path.join("tutorials").join("programming").join("_index.md"))
         .unwrap();
     assert_eq!(prog_section.subsections.len(), 0);
     assert_eq!(prog_section.pages.len(), 2);
 }
-// 2 helper macros to make all the build testing more bearable
-macro_rules! file_exists {
-    ($root: expr, $path: expr) => {{
-        let mut path = $root.clone();
-        for component in $path.split("/") {
-            path = path.join(component);
-        }
-        Path::new(&path).exists()
-    }};
-}
-
-macro_rules! file_contains {
-    ($root: expr, $path: expr, $text: expr) => {{
-        let mut path = $root.clone();
-        for component in $path.split("/") {
-            path = path.join(component);
-        }
-        let mut file = File::open(&path).unwrap();
-        let mut s = String::new();
-        file.read_to_string(&mut s).unwrap();
-        println!("{}", s);
-        s.contains($text)
-    }};
-}
 #[test]
 fn can_build_site_without_live_reload() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+    let (_, _tmp_dir, public) = build_site("test_site");

     assert!(&public.exists());
     assert!(file_exists!(public, "index.html"));
@@ -210,6 +175,8 @@ fn can_build_site_without_live_reload() {
     ));
     // Drafts are not in the sitemap
     assert!(!file_contains!(public, "sitemap.xml", "draft"));
+    // render: false sections are not in the sitemap either
+    assert!(!file_contains!(public, "sitemap.xml", "posts/2018/</loc>"));
     // robots.txt has been rendered from the template
     assert!(file_contains!(public, "robots.txt", "User-agent: zola"));
@@ -222,17 +189,12 @@
 #[test]
 fn can_build_site_with_live_reload() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.enable_live_reload(1000);
-    site.build().unwrap();
+    let (_, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
+        site.enable_live_reload(1000);
+        (site, true)
+    });

-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert!(file_exists!(public, "index.html"));
     assert!(file_exists!(public, "sitemap.xml"));
@@ -271,12 +233,11 @@ fn can_build_site_with_live_reload() {
 #[test]
 fn can_build_site_with_taxonomies() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
+    let (site, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
         site.load().unwrap();
-    for (i, (_, page)) in site.library.pages_mut().iter_mut().enumerate() {
+        {
+            let mut library = site.library.write().unwrap();
+            for (i, (_, page)) in library.pages_mut().iter_mut().enumerate() {
                 page.meta.taxonomies = {
                     let mut taxonomies = HashMap::new();
                     taxonomies.insert(
@@ -286,13 +247,12 @@ fn can_build_site_with_taxonomies() {
                     taxonomies
                 };
             }
+        }
         site.populate_taxonomies().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+        (site, false)
+    });

-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert_eq!(site.taxonomies.len(), 1);
     assert!(file_exists!(public, "index.html"));
@@ -340,15 +300,7 @@ fn can_build_site_with_taxonomies() {
 #[test]
 fn can_build_site_and_insert_anchor_links() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+    let (_, _tmp_dir, public) = build_site("test_site");

     assert!(Path::new(&public).exists());
     // anchor link inserted
@@ -361,23 +313,22 @@ fn can_build_site_and_insert_anchor_links() {
 #[test]
 fn can_build_site_with_pagination_for_section() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
+    let (_, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
         site.load().unwrap();
-    for (_, section) in site.library.sections_mut() {
+        {
+            let mut library = site.library.write().unwrap();
+            for (_, section) in library.sections_mut() {
                 if section.is_index() {
                     continue;
                 }
                 section.meta.paginate_by = Some(2);
                 section.meta.template = Some("section_paginated.html".to_string());
             }
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+        }
+        (site, false)
+    });

-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert!(file_exists!(public, "index.html"));
     assert!(file_exists!(public, "sitemap.xml"));
@@ -478,21 +429,22 @@ fn can_build_site_with_pagination_for_section() {
 #[test]
 fn can_build_site_with_pagination_for_index() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
+    let (_, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
         site.load().unwrap();
         {
-        let index = site.library.get_section_mut(&path.join("content").join("_index.md")).unwrap();
+            let mut library = site.library.write().unwrap();
+            {
+                let index = library
+                    .get_section_mut(&site.base_path.join("content").join("_index.md"))
+                    .unwrap();
                 index.meta.paginate_by = Some(2);
                 index.meta.template = Some("index_paginated.html".to_string());
             }
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+        }
+        (site, false)
+    });

-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert!(file_exists!(public, "index.html"));
     assert!(file_exists!(public, "sitemap.xml"));
@@ -530,33 +482,34 @@ fn can_build_site_with_pagination_for_index() {
 #[test]
 fn can_build_site_with_pagination_for_taxonomy() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
+    let (_, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
         site.config.taxonomies.push(Taxonomy {
             name: "tags".to_string(),
             paginate_by: Some(2),
             paginate_path: None,
             rss: true,
+            lang: site.config.default_language.clone(),
         });
         site.load().unwrap();
+        {
+            let mut library = site.library.write().unwrap();
-    for (i, (_, page)) in site.library.pages_mut().iter_mut().enumerate() {
+            for (i, (_, page)) in library.pages_mut().iter_mut().enumerate() {
                 page.meta.taxonomies = {
                     let mut taxonomies = HashMap::new();
-        taxonomies
-            .insert("tags".to_string(), vec![if i % 2 == 0 { "A" } else { "B" }.to_string()]);
+                    taxonomies.insert(
+                        "tags".to_string(),
+                        vec![if i % 2 == 0 { "A" } else { "B" }.to_string()],
+                    );
                     taxonomies
                 };
             }
+        }
         site.populate_taxonomies().unwrap();
+        (site, false)
+    });

-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert!(file_exists!(public, "index.html"));
     assert!(file_exists!(public, "sitemap.xml"));
@@ -610,16 +563,9 @@ fn can_build_site_with_pagination_for_taxonomy() {
 #[test]
 fn can_build_rss_feed() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+    let (_, _tmp_dir, public) = build_site("test_site");

-    assert!(Path::new(&public).exists());
+    assert!(&public.exists());
     assert!(file_exists!(public, "rss.xml"));
     // latest article is posts/extra-syntax.md
     assert!(file_contains!(public, "rss.xml", "Extra Syntax"));
@@ -629,15 +575,10 @@ fn can_build_rss_feed() {
 #[test]
 fn can_build_search_index() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
+    let (_, _tmp_dir, public) = build_site_with_setup("test_site", |mut site| {
         site.config.build_search_index = true;
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+        (site, true)
+    });

     assert!(Path::new(&public).exists());
     assert!(file_exists!(public, "elasticlunr.min.js"));
@@ -646,14 +587,7 @@ fn can_build_search_index() {
 #[test]
 fn can_build_with_extra_syntaxes() {
-    let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
-    path.push("test_site");
-    let mut site = Site::new(&path, "config.toml").unwrap();
-    site.load().unwrap();
-    let tmp_dir = tempdir().expect("create temp dir");
-    let public = &tmp_dir.path().join("public");
-    site.set_output_path(&public);
-    site.build().unwrap();
+    let (_, _tmp_dir, public) = build_site("test_site");

     assert!(&public.exists());
     assert!(file_exists!(public, "posts/extra-syntax/index.html"));
@@ -672,38 +606,48 @@ fn can_apply_page_templates() {
     site.load().unwrap();
     let template_path = path.join("content").join("applying_page_template");

+    let library = site.library.read().unwrap();
-    let template_section = site.library.get_section(&template_path.join("_index.md")).unwrap();
+    let template_section = library.get_section(&template_path.join("_index.md")).unwrap();
     assert_eq!(template_section.subsections.len(), 2);
     assert_eq!(template_section.pages.len(), 2);

-    let from_section_config = site.library.get_page_by_key(template_section.pages[0]);
+    let from_section_config = library.get_page_by_key(template_section.pages[0]);
     assert_eq!(from_section_config.meta.template, Some("page_template.html".into()));
     assert_eq!(from_section_config.meta.title, Some("From section config".into()));
-    let override_page_template = site.library.get_page_by_key(template_section.pages[1]);
+    let override_page_template = library.get_page_by_key(template_section.pages[1]);
     assert_eq!(override_page_template.meta.template, Some("page_template_override.html".into()));
     assert_eq!(override_page_template.meta.title, Some("Override".into()));

     // It should have applied recursively as well
     let another_section =
-        site.library.get_section(&template_path.join("another_section").join("_index.md")).unwrap();
+        library.get_section(&template_path.join("another_section").join("_index.md")).unwrap();
     assert_eq!(another_section.subsections.len(), 0);
     assert_eq!(another_section.pages.len(), 1);

-    let changed_recursively = site.library.get_page_by_key(another_section.pages[0]);
+    let changed_recursively = library.get_page_by_key(another_section.pages[0]);
     assert_eq!(changed_recursively.meta.template, Some("page_template.html".into()));
     assert_eq!(changed_recursively.meta.title, Some("Changed recursively".into()));

     // But it should not have overridden a child's page_template
-    let yet_another_section = site
-        .library
-        .get_section(&template_path.join("yet_another_section").join("_index.md"))
-        .unwrap();
+    let yet_another_section =
+        library.get_section(&template_path.join("yet_another_section").join("_index.md")).unwrap();
     assert_eq!(yet_another_section.subsections.len(), 0);
     assert_eq!(yet_another_section.pages.len(), 1);

-    let child = site.library.get_page_by_key(yet_another_section.pages[0]);
+    let child = library.get_page_by_key(yet_another_section.pages[0]);
     assert_eq!(child.meta.template, Some("page_template_child.html".into()));
     assert_eq!(child.meta.title, Some("Local section override".into()));
 }
// https://github.com/getzola/zola/issues/571
#[test]
fn can_build_site_custom_builtins_from_theme() {
let (_, _tmp_dir, public) = build_site("test_site");
assert!(&public.exists());
// 404.html is a theme template.
assert!(file_exists!(public, "404.html"));
assert!(file_contains!(public, "404.html", "Oops"));
}
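Every test above now starts with a one-line call to `common::build_site` or `common::build_site_with_setup`, which replaces the repeated path/tempdir/build boilerplate this diff removes. The new `tests/common.rs` is not part of this hunk, so the following is only a plausible sketch of the helper's shape, using a stand-in `Site` type instead of the real one from the `site` crate:

```rust
use std::path::PathBuf;

// Stand-in for `site::Site`; the real type comes from the `site` crate and the
// real helpers live in the new `tests/common.rs`, which is not shown in this diff.
struct Site {
    base_path: PathBuf,
    output_path: PathBuf,
    built: bool,
}

impl Site {
    fn new(base_path: PathBuf) -> Self {
        Site { base_path, output_path: PathBuf::new(), built: false }
    }

    fn build(&mut self) {
        self.built = true;
    }
}

// Hypothetical shape of `common::build_site_with_setup`: run a setup closure,
// then build only if the closure asks for it. The `(site, needs_build)` return
// tuple mirrors the call sites in the tests above.
fn build_site_with_setup<F>(name: &str, setup: F) -> (Site, PathBuf)
where
    F: FnOnce(Site) -> (Site, bool),
{
    let site = Site::new(PathBuf::from(name));
    let (mut site, needs_build) = setup(site);
    site.output_path = site.base_path.join("public");
    if needs_build {
        site.build();
    }
    let public = site.output_path.clone();
    (site, public)
}

fn main() {
    // `(site, true)` asks the helper to build; `(site, false)` leaves building
    // to the setup closure itself, as the pagination tests do.
    let (site, public) = build_site_with_setup("test_site", |site| (site, true));
    assert!(site.built);
    assert_eq!(public, PathBuf::from("test_site").join("public"));
}
```

The `(site, false)` variant matters because some tests (taxonomies, pagination) mutate the loaded library before building, so the closure needs to control when `build()` runs.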
@@ -0,0 +1,141 @@
extern crate site;
mod common;
use std::env;
use common::build_site;
use site::Site;
#[test]
fn can_parse_multilingual_site() {
let mut path = env::current_dir().unwrap().parent().unwrap().parent().unwrap().to_path_buf();
path.push("test_site_i18n");
let mut site = Site::new(&path, "config.toml").unwrap();
site.load().unwrap();
let library = site.library.read().unwrap();
assert_eq!(library.pages().len(), 10);
assert_eq!(library.sections().len(), 6);
// default index sections
let default_index_section =
library.get_section(&path.join("content").join("_index.md")).unwrap();
assert_eq!(default_index_section.pages.len(), 1);
assert!(default_index_section.ancestors.is_empty());
let fr_index_section = library.get_section(&path.join("content").join("_index.fr.md")).unwrap();
assert_eq!(fr_index_section.pages.len(), 1);
assert!(fr_index_section.ancestors.is_empty());
// blog sections get only their own language pages
let blog_path = path.join("content").join("blog");
let default_blog = library.get_section(&blog_path.join("_index.md")).unwrap();
assert_eq!(default_blog.subsections.len(), 0);
assert_eq!(default_blog.pages.len(), 4);
assert_eq!(
default_blog.ancestors,
vec![*library.get_section_key(&default_index_section.file.path).unwrap()]
);
for key in &default_blog.pages {
let page = library.get_page_by_key(*key);
assert_eq!(page.lang, "en");
}
let fr_blog = library.get_section(&blog_path.join("_index.fr.md")).unwrap();
assert_eq!(fr_blog.subsections.len(), 0);
assert_eq!(fr_blog.pages.len(), 3);
assert_eq!(
fr_blog.ancestors,
vec![*library.get_section_key(&fr_index_section.file.path).unwrap()]
);
for key in &fr_blog.pages {
let page = library.get_page_by_key(*key);
assert_eq!(page.lang, "fr");
}
}
#[test]
fn can_build_multilingual_site() {
let (_, _tmp_dir, public) = build_site("test_site_i18n");
assert!(public.exists());
// Index pages
assert!(file_exists!(public, "index.html"));
assert!(file_exists!(public, "fr/index.html"));
assert!(file_contains!(public, "fr/index.html", "Une page"));
assert!(file_contains!(public, "fr/index.html", "Language: fr"));
assert!(file_exists!(public, "base/index.html"));
assert!(file_exists!(public, "fr/base/index.html"));
// Sections are there as well, with translations info
assert!(file_exists!(public, "blog/index.html"));
assert!(file_contains!(
public,
"blog/index.html",
"Translated in fr: Mon blog https://example.com/fr/blog/"
));
assert!(file_contains!(
public,
"blog/index.html",
"Translated in it: Il mio blog https://example.com/it/blog/"
));
assert!(file_exists!(public, "fr/blog/index.html"));
assert!(file_contains!(public, "fr/blog/index.html", "Language: fr"));
assert!(file_contains!(
public,
"fr/blog/index.html",
"Translated in en: My blog https://example.com/blog/"
));
assert!(file_contains!(
public,
"fr/blog/index.html",
"Translated in it: Il mio blog https://example.com/it/blog/"
));
// Normal pages are there with the translations
assert!(file_exists!(public, "blog/something/index.html"));
assert!(file_contains!(
public,
"blog/something/index.html",
"Translated in fr: Quelque chose https://example.com/fr/blog/something/"
));
assert!(file_exists!(public, "fr/blog/something/index.html"));
assert!(file_contains!(public, "fr/blog/something/index.html", "Language: fr"));
assert!(file_contains!(
public,
"fr/blog/something/index.html",
"Translated in en: Something https://example.com/blog/something/"
));
// sitemap contains all languages
assert!(file_exists!(public, "sitemap.xml"));
assert!(file_contains!(public, "sitemap.xml", "https://example.com/blog/something-else/"));
assert!(file_contains!(public, "sitemap.xml", "https://example.com/fr/blog/something-else/"));
assert!(file_contains!(public, "sitemap.xml", "https://example.com/it/blog/something-else/"));
// one rss per language
assert!(file_exists!(public, "rss.xml"));
assert!(file_contains!(public, "rss.xml", "https://example.com/blog/something-else/"));
assert!(!file_contains!(public, "rss.xml", "https://example.com/fr/blog/something-else/"));
assert!(file_exists!(public, "fr/rss.xml"));
assert!(!file_contains!(public, "fr/rss.xml", "https://example.com/blog/something-else/"));
assert!(file_contains!(public, "fr/rss.xml", "https://example.com/fr/blog/something-else/"));
// Italian doesn't have RSS enabled
assert!(!file_exists!(public, "it/rss.xml"));
// Taxonomies are per-language
assert!(file_exists!(public, "authors/index.html"));
assert!(file_contains!(public, "authors/index.html", "Queen"));
assert!(!file_contains!(public, "authors/index.html", "Vincent"));
assert!(!file_exists!(public, "auteurs/index.html"));
assert!(file_exists!(public, "authors/queen-elizabeth/rss.xml"));
assert!(!file_exists!(public, "fr/authors/index.html"));
assert!(file_exists!(public, "fr/auteurs/index.html"));
assert!(!file_contains!(public, "fr/auteurs/index.html", "Queen"));
assert!(file_contains!(public, "fr/auteurs/index.html", "Vincent"));
assert!(!file_exists!(public, "fr/auteurs/vincent-prouillet/rss.xml"));
}
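The assertions above depend on each section collecting only pages in its own language: `default_blog` gets the four `en` pages, `fr_blog` the three `fr` ones. A minimal sketch of that per-language filtering, with illustrative types rather than Zola's actual internals:

```rust
// Illustrative types only; Zola stores pages in a keyed library, not a Vec.
struct Page {
    lang: String,
    title: String,
}

// A section keeps only the pages whose language matches its own.
fn pages_for_section<'a>(section_lang: &str, pages: &'a [Page]) -> Vec<&'a Page> {
    pages.iter().filter(|p| p.lang == section_lang).collect()
}

fn main() {
    let pages = vec![
        Page { lang: "en".into(), title: "Something".into() },
        Page { lang: "fr".into(), title: "Quelque chose".into() },
    ];
    let fr_pages = pages_for_section("fr", &pages);
    assert_eq!(fr_pages.len(), 1);
    assert_eq!(fr_pages[0].title, "Quelque chose");
}
```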
@@ -4,14 +4,13 @@ version = "0.1.0"
 authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]

 [dependencies]
-tera = "0.11"
+tera = "1.0.0-alpha.3"
 base64 = "0.10"
 lazy_static = "1"
 pulldown-cmark = "0.2"
 toml = "0.4"
 csv = "1"
 serde_json = "1.0"
-error-chain = "0.12"
 reqwest = "0.9"
 url = "1.5"
@@ -1 +1,2 @@
 User-agent: *
+Sitemap: {{ get_url(path="sitemap.xml") }}
@@ -1,22 +1,10 @@
 <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
-    {% for page in pages %}
+    {% for sitemap_entry in entries %}
     <url>
-        <loc>{{ page.permalink | safe }}</loc>
-        {% if page.date %}
-            <lastmod>{{ page.date }}</lastmod>
+        <loc>{{ sitemap_entry.permalink | safe }}</loc>
+        {% if sitemap_entry.date %}
+            <lastmod>{{ sitemap_entry.date }}</lastmod>
         {% endif %}
     </url>
     {% endfor %}
-    {% for section in sections %}
-    <url>
-        <loc>{{ section.permalink | safe }}</loc>
-    </url>
-    {% endfor %}
-    {% for taxonomy in taxonomies %}
-        {% for entry in taxonomy %}
-        <url>
-            <loc>{{ entry.permalink | safe }}</loc>
-        </url>
-        {% endfor %}
-    {% endfor %}
 </urlset>
@@ -0,0 +1,7 @@
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
{% for sitemap in sitemaps %}
<sitemap>
<loc>{{ sitemap }}</loc>
</sitemap>
{% endfor %}
</sitemapindex>
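Per the changelog, the sitemap is now split when it holds more than 30 000 links, and this new index template lists each part. A hedged sketch of the chunking logic (the names and the `sitemapN.xml` scheme are illustrative, not necessarily Zola's actual internals):

```rust
/// Number of links after which the sitemap is split (per the changelog).
const SITEMAP_LIMIT: usize = 30_000;

/// Illustrative helper: given the total number of sitemap entries, produce the
/// file names a `split_sitemap_index.xml` template would loop over.
fn sitemap_parts(total_entries: usize) -> Vec<String> {
    if total_entries <= SITEMAP_LIMIT {
        // Small sites keep a single sitemap and no index.
        return vec!["sitemap.xml".to_string()];
    }
    // Ceiling division: one extra part for any remainder.
    let nb_parts = (total_entries + SITEMAP_LIMIT - 1) / SITEMAP_LIMIT;
    (1..=nb_parts).map(|i| format!("sitemap{}.xml", i)).collect()
}

fn main() {
    assert_eq!(sitemap_parts(100), vec!["sitemap.xml"]);
    assert_eq!(sitemap_parts(65_000), vec!["sitemap1.xml", "sitemap2.xml", "sitemap3.xml"]);
}
```

Splitting keeps each file under the 50 000-URL cap that the sitemaps.org protocol imposes per file, with headroom.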
@@ -4,7 +4,7 @@ use base64::{decode, encode};
 use pulldown_cmark as cmark;
 use tera::{to_value, Result as TeraResult, Value};

-pub fn markdown(value: Value, args: HashMap<String, Value>) -> TeraResult<Value> {
+pub fn markdown(value: &Value, args: &HashMap<String, Value>) -> TeraResult<Value> {
     let s = try_get_value!("markdown", "value", String, value);
     let inline = match args.get("inline") {
         Some(val) => try_get_value!("markdown", "inline", bool, val),
@@ -21,21 +21,21 @@ pub fn markdown(value: Value, args: HashMap<String, Value>) -> TeraResult<Value>
     if inline {
         html = html
-            .trim_left_matches("<p>")
+            .trim_start_matches("<p>")
             // pulldown_cmark finishes a paragraph with `</p>\n`
-            .trim_right_matches("</p>\n")
+            .trim_end_matches("</p>\n")
             .to_string();
     }
     Ok(to_value(&html).unwrap())
 }

-pub fn base64_encode(value: Value, _: HashMap<String, Value>) -> TeraResult<Value> {
+pub fn base64_encode(value: &Value, _: &HashMap<String, Value>) -> TeraResult<Value> {
     let s = try_get_value!("base64_encode", "value", String, value);
     Ok(to_value(&encode(s.as_bytes())).unwrap())
 }

-pub fn base64_decode(value: Value, _: HashMap<String, Value>) -> TeraResult<Value> {
+pub fn base64_decode(value: &Value, _: &HashMap<String, Value>) -> TeraResult<Value> {
     let s = try_get_value!("base64_decode", "value", String, value);
     Ok(to_value(&String::from_utf8(decode(s.as_bytes()).unwrap()).unwrap()).unwrap())
 }
@@ -50,7 +50,7 @@ mod tests {
     #[test]
     fn markdown_filter() {
-        let result = markdown(to_value(&"# Hey").unwrap(), HashMap::new());
+        let result = markdown(&to_value(&"# Hey").unwrap(), &HashMap::new());
         assert!(result.is_ok());
         assert_eq!(result.unwrap(), to_value(&"<h1>Hey</h1>\n").unwrap());
     }
@@ -60,8 +60,8 @@ mod tests {
         let mut args = HashMap::new();
         args.insert("inline".to_string(), to_value(true).unwrap());
         let result = markdown(
-            to_value(&"Using `map`, `filter`, and `fold` instead of `for`").unwrap(),
-            args,
+            &to_value(&"Using `map`, `filter`, and `fold` instead of `for`").unwrap(),
+            &args,
         );
         assert!(result.is_ok());
         assert_eq!(result.unwrap(), to_value(&"Using <code>map</code>, <code>filter</code>, and <code>fold</code> instead of <code>for</code>").unwrap());
@@ -73,7 +73,7 @@ mod tests {
         let mut args = HashMap::new();
         args.insert("inline".to_string(), to_value(true).unwrap());
         let result = markdown(
-            to_value(
+            &to_value(
                 &r#"
|id|author_id| timestamp_created|title                |content           |
|-:|--------:|-----------------------:|:---------------------|:-----------------|
@@ -82,7 +82,7 @@ mod tests {
"#,
             )
             .unwrap(),
-            args,
+            &args,
         );
         assert!(result.is_ok());
         assert!(result.unwrap().as_str().unwrap().contains("<table>"));
@@ -102,7 +102,7 @@ mod tests {
         ];
         for (input, expected) in tests {
             let args = HashMap::new();
-            let result = base64_encode(to_value(input).unwrap(), args);
+            let result = base64_encode(&to_value(input).unwrap(), &args);
             assert!(result.is_ok());
             assert_eq!(result.unwrap(), to_value(expected).unwrap());
         }
@@ -121,7 +121,7 @@ mod tests {
         ];
         for (input, expected) in tests {
             let args = HashMap::new();
-            let result = base64_decode(to_value(input).unwrap(), args);
+            let result = base64_decode(&to_value(input).unwrap(), &args);
             assert!(result.is_ok());
             assert_eq!(result.unwrap(), to_value(expected).unwrap());
         }
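Besides switching the filter signatures to references for Tera 1.0, this file swaps the deprecated `trim_left_matches`/`trim_right_matches` for `trim_start_matches`/`trim_end_matches`. The inline-mode cleanup they perform can be seen in isolation (a sketch, independent of Tera and pulldown_cmark):

```rust
// Sketch of the inline-mode cleanup the `markdown` filter applies above:
// pulldown_cmark wraps inline input in a paragraph, so the filter strips the
// leading `<p>` and the trailing `</p>\n` to yield a bare inline fragment.
fn strip_wrapping_paragraph(html: &str) -> String {
    html.trim_start_matches("<p>").trim_end_matches("</p>\n").to_string()
}

fn main() {
    assert_eq!(
        strip_wrapping_paragraph("<p>Using <code>map</code></p>\n"),
        "Using <code>map</code>"
    );
    // Input without the wrapper passes through unchanged.
    assert_eq!(strip_wrapping_paragraph("plain"), "plain");
}
```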
@ -16,7 +16,7 @@ use std::sync::{Arc, Mutex};
use csv::Reader; use csv::Reader;
use std::collections::HashMap; use std::collections::HashMap;
use tera::{from_value, to_value, Error, GlobalFn, Map, Result, Value}; use tera::{from_value, to_value, Error, Function as TeraFn, Map, Result, Value};
static GET_DATA_ARGUMENT_ERROR_MESSAGE: &str = static GET_DATA_ARGUMENT_ERROR_MESSAGE: &str =
"`load_data`: requires EITHER a `path` or `url` argument"; "`load_data`: requires EITHER a `path` or `url` argument";
@ -50,24 +50,24 @@ impl FromStr for OutputFormat {
type Err = Error; type Err = Error;
fn from_str(output_format: &str) -> Result<Self> { fn from_str(output_format: &str) -> Result<Self> {
return match output_format { match output_format {
"toml" => Ok(OutputFormat::Toml), "toml" => Ok(OutputFormat::Toml),
"csv" => Ok(OutputFormat::Csv), "csv" => Ok(OutputFormat::Csv),
"json" => Ok(OutputFormat::Json), "json" => Ok(OutputFormat::Json),
"plain" => Ok(OutputFormat::Plain), "plain" => Ok(OutputFormat::Plain),
format => Err(format!("Unknown output format {}", format).into()), format => Err(format!("Unknown output format {}", format).into()),
}; }
} }
} }
impl OutputFormat { impl OutputFormat {
fn as_accept_header(&self) -> header::HeaderValue { fn as_accept_header(&self) -> header::HeaderValue {
return header::HeaderValue::from_static(match self { header::HeaderValue::from_static(match self {
OutputFormat::Json => "application/json", OutputFormat::Json => "application/json",
OutputFormat::Csv => "text/csv", OutputFormat::Csv => "text/csv",
OutputFormat::Toml => "application/toml", OutputFormat::Toml => "application/toml",
OutputFormat::Plain => "text/plain", OutputFormat::Plain => "text/plain",
}); })
} }
} }
@ -91,18 +91,18 @@ impl DataSource {
if let Some(url) = url_arg { if let Some(url) = url_arg {
return Url::parse(&url) return Url::parse(&url)
.map(|parsed_url| DataSource::Url(parsed_url)) .map(DataSource::Url)
.map_err(|e| format!("Failed to parse {} as url: {}", url, e).into()); .map_err(|e| format!("Failed to parse {} as url: {}", url, e).into());
} }
return Err(GET_DATA_ARGUMENT_ERROR_MESSAGE.into()); Err(GET_DATA_ARGUMENT_ERROR_MESSAGE.into())
} }
fn get_cache_key(&self, format: &OutputFormat) -> u64 { fn get_cache_key(&self, format: &OutputFormat) -> u64 {
let mut hasher = DefaultHasher::new(); let mut hasher = DefaultHasher::new();
format.hash(&mut hasher); format.hash(&mut hasher);
self.hash(&mut hasher); self.hash(&mut hasher);
return hasher.finish(); hasher.finish()
} }
} }
@ -123,10 +123,9 @@ fn get_data_source_from_args(
args: &HashMap<String, Value>, args: &HashMap<String, Value>,
) -> Result<DataSource> { ) -> Result<DataSource> {
let path_arg = optional_arg!(String, args.get("path"), GET_DATA_ARGUMENT_ERROR_MESSAGE); let path_arg = optional_arg!(String, args.get("path"), GET_DATA_ARGUMENT_ERROR_MESSAGE);
let url_arg = optional_arg!(String, args.get("url"), GET_DATA_ARGUMENT_ERROR_MESSAGE); let url_arg = optional_arg!(String, args.get("url"), GET_DATA_ARGUMENT_ERROR_MESSAGE);
return DataSource::from_args(path_arg, url_arg, content_path); DataSource::from_args(path_arg, url_arg, content_path)
} }
fn read_data_file(base_path: &PathBuf, full_path: PathBuf) -> Result<String> { fn read_data_file(base_path: &PathBuf, full_path: PathBuf) -> Result<String> {
@ -140,9 +139,9 @@ fn read_data_file(base_path: &PathBuf, full_path: PathBuf) -> Result<String> {
) )
.into()); .into());
} }
return read_file(&full_path).map_err(|e| { read_file(&full_path).map_err(|e| {
format!("`load_data`: error {} loading file {}", full_path.to_str().unwrap(), e).into() format!("`load_data`: error {} loading file {}", full_path.to_str().unwrap(), e).into()
}); })
} }
fn get_output_format_from_args( fn get_output_format_from_args(
@@ -152,47 +151,56 @@ fn get_output_format_from_args(
     let format_arg = optional_arg!(
         String,
         args.get("format"),
-        "`load_data`: `format` needs to be an argument with a string value, being one of the supported `load_data` file types (csv, json, toml)"
+        "`load_data`: `format` needs to be an argument with a string value, being one of the supported `load_data` file types (csv, json, toml, plain)"
     );

     if let Some(format) = format_arg {
-        if format == "plain" {
-            return Ok(OutputFormat::Plain);
-        }
         return OutputFormat::from_str(&format);
     }

     let from_extension = if let DataSource::Path(path) = data_source {
-        let extension_result: Result<&str> =
-            path.extension().map(|extension| extension.to_str().unwrap()).ok_or(
-                format!("Could not determine format for {} from extension", path.display()).into(),
-            );
-        extension_result?
+        path.extension().map(|extension| extension.to_str().unwrap()).unwrap_or_else(|| "plain")
     } else {
         "plain"
     };
-    return OutputFormat::from_str(from_extension);
+
+    // Always default to Plain if we don't know what it is
+    OutputFormat::from_str(from_extension).or_else(|_| Ok(OutputFormat::Plain))
 }
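The hunk above changes how `load_data` resolves its output format: an explicit `format` argument wins, then the file extension is tried, and anything unrecognised now falls back to plain text instead of erroring. A minimal std-only sketch of that precedence (the `Format` enum and `resolve_format` helper are illustrative stand-ins, not Zola's actual types):

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum Format {
    Toml,
    Csv,
    Json,
    Plain,
}

// Illustrative stand-in for `OutputFormat::from_str`.
fn from_str(s: &str) -> Result<Format, String> {
    match s {
        "toml" => Ok(Format::Toml),
        "csv" => Ok(Format::Csv),
        "json" => Ok(Format::Json),
        "plain" => Ok(Format::Plain),
        other => Err(format!("Unknown format: {}", other)),
    }
}

// Mirrors the new precedence: explicit `format` argument wins, then the
// file extension, and anything unrecognised falls back to `Plain`.
fn resolve_format(format_arg: Option<&str>, path: Option<&Path>) -> Result<Format, String> {
    if let Some(format) = format_arg {
        return from_str(format);
    }
    let from_extension =
        path.and_then(|p| p.extension()).and_then(|e| e.to_str()).unwrap_or("plain");
    // Always default to Plain if we don't know what it is.
    Ok(from_str(from_extension).unwrap_or(Format::Plain))
}

fn main() {
    // Unknown extension defaults to plain.
    assert_eq!(resolve_format(None, Some(Path::new("test.css"))), Ok(Format::Plain));
    // Known extension is detected.
    assert_eq!(resolve_format(None, Some(Path::new("test.csv"))), Ok(Format::Csv));
    // An explicit `format` argument overrides the extension.
    assert_eq!(resolve_format(Some("plain"), Some(Path::new("test.csv"))), Ok(Format::Plain));
}
```

This is the behaviour the new `unknown_extension_defaults_to_plain` and `can_override_known_extension_with_format` tests further down exercise against the real implementation.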
-/// A global function to load data from a file or from a URL
+/// A Tera function to load data from a file or from a URL
 /// Currently the supported formats are json, toml, csv and plain text
-pub fn make_load_data(content_path: PathBuf, base_path: PathBuf) -> GlobalFn {
-    let mut headers = header::HeaderMap::new();
-    headers.insert(header::USER_AGENT, "zola".parse().unwrap());
-    let client = Arc::new(Mutex::new(Client::builder().build().expect("reqwest client build")));
-    let result_cache: Arc<Mutex<HashMap<u64, Value>>> = Arc::new(Mutex::new(HashMap::new()));
-    Box::new(move |args| -> Result<Value> {
-        let data_source = get_data_source_from_args(&content_path, &args)?;
+#[derive(Debug)]
+pub struct LoadData {
+    base_path: PathBuf,
+    client: Arc<Mutex<Client>>,
+    result_cache: Arc<Mutex<HashMap<u64, Value>>>,
+}
+impl LoadData {
+    pub fn new(base_path: PathBuf) -> Self {
+        let client = Arc::new(Mutex::new(Client::builder().build().expect("reqwest client build")));
+        let result_cache = Arc::new(Mutex::new(HashMap::new()));
+        Self { base_path, client, result_cache }
+    }
+}
+
+impl TeraFn for LoadData {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
+        let data_source = get_data_source_from_args(&self.base_path, &args)?;
         let file_format = get_output_format_from_args(&args, &data_source)?;
         let cache_key = data_source.get_cache_key(&file_format);

-        let mut cache = result_cache.lock().expect("result cache lock");
-        let response_client = client.lock().expect("response client lock");
+        let mut cache = self.result_cache.lock().expect("result cache lock");
+        let response_client = self.client.lock().expect("response client lock");
         if let Some(cached_result) = cache.get(&cache_key) {
             return Ok(cached_result.clone());
         }

         let data = match data_source {
-            DataSource::Path(path) => read_data_file(&base_path, path),
+            DataSource::Path(path) => read_data_file(&self.base_path, path),
             DataSource::Url(url) => {
                 let mut response = response_client
                     .get(url.as_str())
@@ -224,14 +232,14 @@ pub fn make_load_data(content_path: PathBuf, base_path: PathBuf) -> GlobalFn {
         }

         result_value
-    })
+    }
 }
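The refactor above keeps the caching behaviour the closure version had: results are memoized under a `u64` key derived from the data source and the requested format, behind a `Mutex`-guarded map. A std-only sketch of that idea (the `Loader` type, `cache_key` helper, and the load counter are illustrative, not Zola's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

// Illustrative stand-in for `DataSource::get_cache_key`: hash the source
// together with the requested format into a u64 key.
fn cache_key(source: &str, format: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    source.hash(&mut hasher);
    format.hash(&mut hasher);
    hasher.finish()
}

struct Loader {
    result_cache: Mutex<HashMap<u64, String>>,
    loads: Mutex<u32>, // counts actual (non-cached) loads, for demonstration
}

impl Loader {
    fn new() -> Self {
        Self { result_cache: Mutex::new(HashMap::new()), loads: Mutex::new(0) }
    }

    // Mirrors the `call` flow above: check the cache first, otherwise
    // "load" the data and store it under the computed key.
    fn load(&self, source: &str, format: &str) -> String {
        let key = cache_key(source, format);
        let mut cache = self.result_cache.lock().expect("result cache lock");
        if let Some(cached) = cache.get(&key) {
            return cached.clone();
        }
        *self.loads.lock().unwrap() += 1;
        let data = format!("contents of {} as {}", source, format);
        cache.insert(key, data.clone());
        data
    }
}

fn main() {
    let loader = Loader::new();
    let a = loader.load("test.csv", "csv");
    let b = loader.load("test.csv", "csv"); // second call is served from cache
    assert_eq!(a, b);
    assert_eq!(*loader.loads.lock().unwrap(), 1);
    // A different format for the same source produces a different cache key.
    loader.load("test.csv", "plain");
    assert_eq!(*loader.loads.lock().unwrap(), 2);
}
```

Because the key includes the format, loading the same file as `csv` and as `plain` yields two independent cache entries, which matches the `get_cache_key(&file_format)` call in the diff.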
 /// Parse a JSON string and convert it to a Tera Value
 fn load_json(json_data: String) -> Result<Value> {
     let json_content: Value =
         serde_json::from_str(json_data.as_str()).map_err(|e| format!("{:?}", e))?;
-    return Ok(json_content);
+    Ok(json_content)
 }

 /// Parse a TOML string and convert it to a Tera Value
@@ -283,7 +291,16 @@ fn load_csv(csv_data: String) -> Result<Value> {
     let mut records_array: Vec<Value> = Vec::new();

     for result in records {
-        let record = result.unwrap();
+        let record = match result {
+            Ok(r) => r,
+            Err(e) => {
+                return Err(tera::Error::chain(
+                    String::from("Error encountered when parsing csv records"),
+                    e,
+                ));
+            }
+        };

         let mut elements_array: Vec<Value> = Vec::new();
         for e in record.into_iter() {
@@ -302,12 +319,12 @@ fn load_csv(csv_data: String) -> Result<Value> {
 #[cfg(test)]
 mod tests {
-    use super::{make_load_data, DataSource, OutputFormat};
+    use super::{DataSource, LoadData, OutputFormat};

     use std::collections::HashMap;
     use std::path::PathBuf;

-    use tera::to_value;
+    use tera::{to_value, Function};

     fn get_test_file(filename: &str) -> PathBuf {
         let test_files = PathBuf::from("../utils/test-files").canonicalize().unwrap();
@@ -316,27 +333,25 @@ mod tests {
     #[test]
     fn fails_when_missing_file() {
-        let static_fn =
-            make_load_data(PathBuf::from("../utils/test-files"), PathBuf::from("../utils"));
+        let static_fn = LoadData::new(PathBuf::from("../utils"));
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("../../../READMEE.md").unwrap());
-        let result = static_fn(args);
+        let result = static_fn.call(&args);

         assert!(result.is_err());
-        assert!(result.unwrap_err().description().contains("READMEE.md doesn't exist"));
+        assert!(result.unwrap_err().to_string().contains("READMEE.md doesn't exist"));
     }

     #[test]
     fn cant_load_outside_content_dir() {
-        let static_fn =
-            make_load_data(PathBuf::from("../utils/test-files"), PathBuf::from("../utils"));
+        let static_fn = LoadData::new(PathBuf::from(PathBuf::from("../utils")));
         let mut args = HashMap::new();
-        args.insert("path".to_string(), to_value("../../../README.md").unwrap());
+        args.insert("path".to_string(), to_value("../../README.md").unwrap());
         args.insert("format".to_string(), to_value("plain").unwrap());
-        let result = static_fn(args);
+        let result = static_fn.call(&args);

         assert!(result.is_err());
         assert!(result
             .unwrap_err()
-            .description()
+            .to_string()
             .contains("README.md is not inside the base site directory"));
     }
@@ -378,11 +393,11 @@ mod tests {
     #[test]
     fn can_load_remote_data() {
-        let static_fn = make_load_data(PathBuf::new(), PathBuf::new());
+        let static_fn = LoadData::new(PathBuf::new());
         let mut args = HashMap::new();
         args.insert("url".to_string(), to_value("https://httpbin.org/json").unwrap());
         args.insert("format".to_string(), to_value("json").unwrap());
-        let result = static_fn(args).unwrap();
+        let result = static_fn.call(&args).unwrap();

         assert_eq!(
             result.get("slideshow").unwrap().get("title").unwrap(),
             &to_value("Sample Slide Show").unwrap()
@@ -391,29 +406,26 @@ mod tests {
     #[test]
     fn fails_when_request_404s() {
-        let static_fn = make_load_data(PathBuf::new(), PathBuf::new());
+        let static_fn = LoadData::new(PathBuf::new());
         let mut args = HashMap::new();
         args.insert("url".to_string(), to_value("https://httpbin.org/status/404/").unwrap());
         args.insert("format".to_string(), to_value("json").unwrap());
-        let result = static_fn(args);
+        let result = static_fn.call(&args);

         assert!(result.is_err());
         assert_eq!(
-            result.unwrap_err().description(),
+            result.unwrap_err().to_string(),
             "Failed to request https://httpbin.org/status/404/: 404 Not Found"
         );
     }
     #[test]
     fn can_load_toml() {
-        let static_fn = make_load_data(
-            PathBuf::from("../utils/test-files"),
-            PathBuf::from("../utils/test-files"),
-        );
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("test.toml").unwrap());
-        let result = static_fn(args.clone()).unwrap();
+        let result = static_fn.call(&args.clone()).unwrap();

-        //TOML does not load in order
+        // TOML does not load in order
         assert_eq!(
             result,
             json!({
@@ -426,14 +438,52 @@ mod tests {
     }
     #[test]
-    fn can_load_csv() {
-        let static_fn = make_load_data(
-            PathBuf::from("../utils/test-files"),
-            PathBuf::from("../utils/test-files"),
-        );
+    fn unknown_extension_defaults_to_plain() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
+        let mut args = HashMap::new();
+        args.insert("path".to_string(), to_value("test.css").unwrap());
+        let result = static_fn.call(&args.clone()).unwrap();
+
+        assert_eq!(
+            result,
+            ".hello {}\n",
+        );
+    }
+
+    #[test]
+    fn can_override_known_extension_with_format() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
+        let mut args = HashMap::new();
+        args.insert("path".to_string(), to_value("test.csv").unwrap());
+        args.insert("format".to_string(), to_value("plain").unwrap());
+        let result = static_fn.call(&args.clone()).unwrap();
+
+        assert_eq!(
+            result,
+            "Number,Title\n1,Gutenberg\n2,Printing",
+        );
+    }
+
+    #[test]
+    fn will_use_format_on_unknown_extension() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
+        let mut args = HashMap::new();
+        args.insert("path".to_string(), to_value("test.css").unwrap());
+        args.insert("format".to_string(), to_value("plain").unwrap());
+        let result = static_fn.call(&args.clone()).unwrap();
+
+        assert_eq!(
+            result,
+            ".hello {}\n",
+        );
+    }
+
+    #[test]
+    fn can_load_csv() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("test.csv").unwrap());
-        let result = static_fn(args.clone()).unwrap();
+        let result = static_fn.call(&args.clone()).unwrap();

         assert_eq!(
             result,
@@ -447,15 +497,33 @@ mod tests {
         )
     }
+    // Test points to bad csv file with uneven row lengths
+    #[test]
+    fn bad_csv_should_result_in_error() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
+        let mut args = HashMap::new();
+        args.insert("path".to_string(), to_value("uneven_rows.csv").unwrap());
+        let result = static_fn.call(&args.clone());
+        assert!(result.is_err());
+        let error_kind = result.err().unwrap().kind;
+        match error_kind {
+            tera::ErrorKind::Msg(msg) => {
+                if msg != String::from("Error encountered when parsing csv records") {
+                    panic!("Error message is wrong. Perhaps wrong error is being returned?");
+                }
+            }
+            _ => panic!("Error encountered was not expected CSV error"),
+        }
+    }
+
     #[test]
     fn can_load_json() {
-        let static_fn = make_load_data(
-            PathBuf::from("../utils/test-files"),
-            PathBuf::from("../utils/test-files"),
-        );
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"));
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("test.json").unwrap());
-        let result = static_fn(args.clone()).unwrap();
+        let result = static_fn.call(&args.clone()).unwrap();

         assert_eq!(
             result,
@@ -1,9 +1,8 @@
-extern crate error_chain;
 use std::collections::HashMap;
-use std::sync::{Arc, Mutex};
+use std::path::PathBuf;
+use std::sync::{Arc, Mutex, RwLock};

-use tera::{from_value, to_value, GlobalFn, Result, Value};
+use tera::{from_value, to_value, Function as TeraFn, Result, Value};

 use config::Config;
 use library::{Library, Taxonomy};
@@ -16,82 +15,39 @@ mod macros;
 mod load_data;

-pub use self::load_data::make_load_data;
+pub use self::load_data::LoadData;
-pub fn make_trans(config: Config) -> GlobalFn {
-    let translations_config = config.translations;
-    let default_lang = config.default_language.clone();
-
-    Box::new(move |args| -> Result<Value> {
+#[derive(Debug)]
+pub struct Trans {
+    config: Config,
+}
+impl Trans {
+    pub fn new(config: Config) -> Self {
+        Self { config }
+    }
+}
+impl TeraFn for Trans {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
         let key = required_arg!(String, args.get("key"), "`trans` requires a `key` argument.");
         let lang = optional_arg!(String, args.get("lang"), "`trans`: `lang` must be a string.")
-            .unwrap_or_else(|| default_lang.clone());
-        let translations = &translations_config[lang.as_str()];
+            .unwrap_or_else(|| self.config.default_language.clone());
+        let translations = &self.config.translations[lang.as_str()];
         Ok(to_value(&translations[key.as_str()]).unwrap())
-    })
+    }
 }
-pub fn make_get_page(library: &Library) -> GlobalFn {
-    let mut pages = HashMap::new();
-    for page in library.pages_values() {
-        pages.insert(
-            page.file.relative.clone(),
-            to_value(library.get_page(&page.file.path).unwrap().to_serialized(library)).unwrap(),
-        );
-    }
-
-    Box::new(move |args| -> Result<Value> {
-        let path = required_arg!(
-            String,
-            args.get("path"),
-            "`get_page` requires a `path` argument with a string value"
-        );
-        match pages.get(&path) {
-            Some(p) => Ok(p.clone()),
-            None => Err(format!("Page `{}` not found.", path).into()),
-        }
-    })
-}
-
-pub fn make_get_section(library: &Library) -> GlobalFn {
-    let mut sections = HashMap::new();
-    let mut sections_basic = HashMap::new();
-    for section in library.sections_values() {
-        sections.insert(
-            section.file.relative.clone(),
-            to_value(library.get_section(&section.file.path).unwrap().to_serialized(library))
-                .unwrap(),
-        );
-        sections_basic.insert(
-            section.file.relative.clone(),
-            to_value(library.get_section(&section.file.path).unwrap().to_serialized_basic(library))
-                .unwrap(),
-        );
-    }
-
-    Box::new(move |args| -> Result<Value> {
-        let path = required_arg!(
-            String,
-            args.get("path"),
-            "`get_section` requires a `path` argument with a string value"
-        );
-        let metadata_only = args
-            .get("metadata_only")
-            .map_or(false, |c| from_value::<bool>(c.clone()).unwrap_or(false));
-        let container = if metadata_only { &sections_basic } else { &sections };
-        match container.get(&path) {
-            Some(p) => Ok(p.clone()),
-            None => Err(format!("Section `{}` not found.", path).into()),
-        }
-    })
-}
-
-pub fn make_get_url(permalinks: HashMap<String, String>, config: Config) -> GlobalFn {
-    Box::new(move |args| -> Result<Value> {
+#[derive(Debug)]
+pub struct GetUrl {
+    config: Config,
+    permalinks: HashMap<String, String>,
+}
+impl GetUrl {
+    pub fn new(config: Config, permalinks: HashMap<String, String>) -> Self {
+        Self { config, permalinks }
+    }
+}
+impl TeraFn for GetUrl {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
         let cachebust =
             args.get("cachebust").map_or(false, |c| from_value::<bool>(c.clone()).unwrap_or(false));
@@ -105,7 +61,7 @@ pub fn make_get_url(permalinks: HashMap<String, String>, config: Config) -> GlobalFn {
             "`get_url` requires a `path` argument with a string value"
         );

         if path.starts_with("./") {
-            match resolve_internal_link(&path, &permalinks) {
+            match resolve_internal_link(&path, &self.permalinks) {
                 Ok(url) => Ok(to_value(url).unwrap()),
                 Err(_) => {
                     Err(format!("Could not resolve URL for link `{}` not found.", path).into())
@@ -113,90 +69,35 @@ pub fn make_get_url(permalinks: HashMap<String, String>, config: Config) -> GlobalFn {
                 }
             }
         } else {
             // anything else
-            let mut permalink = config.make_permalink(&path);
+            let mut permalink = self.config.make_permalink(&path);
             if !trailing_slash && permalink.ends_with('/') {
                 permalink.pop(); // Removes the slash
             }

             if cachebust {
-                permalink = format!("{}?t={}", permalink, config.build_timestamp.unwrap());
+                permalink = format!("{}?t={}", permalink, self.config.build_timestamp.unwrap());
             }
             Ok(to_value(permalink).unwrap())
         }
-    })
+    }
 }
-pub fn make_get_taxonomy(all_taxonomies: &[Taxonomy], library: &Library) -> GlobalFn {
-    let mut taxonomies = HashMap::new();
-    for taxonomy in all_taxonomies {
-        taxonomies
-            .insert(taxonomy.kind.name.clone(), to_value(taxonomy.to_serialized(library)).unwrap());
-    }
-
-    Box::new(move |args| -> Result<Value> {
-        let kind = required_arg!(
-            String,
-            args.get("kind"),
-            "`get_taxonomy` requires a `kind` argument with a string value"
-        );
-        let container = match taxonomies.get(&kind) {
-            Some(c) => c,
-            None => {
-                return Err(
-                    format!("`get_taxonomy` received an unknown taxonomy as kind: {}", kind).into()
-                );
-            }
-        };
-
-        Ok(to_value(container).unwrap())
-    })
-}
-
-pub fn make_get_taxonomy_url(all_taxonomies: &[Taxonomy]) -> GlobalFn {
-    let mut taxonomies = HashMap::new();
-    for taxonomy in all_taxonomies {
-        let mut items = HashMap::new();
-        for item in &taxonomy.items {
-            items.insert(item.name.clone(), item.permalink.clone());
-        }
-        taxonomies.insert(taxonomy.kind.name.clone(), items);
-    }
-
-    Box::new(move |args| -> Result<Value> {
-        let kind = required_arg!(
-            String,
-            args.get("kind"),
-            "`get_taxonomy_url` requires a `kind` argument with a string value"
-        );
-        let name = required_arg!(
-            String,
-            args.get("name"),
-            "`get_taxonomy_url` requires a `name` argument with a string value"
-        );
-        let container = match taxonomies.get(&kind) {
-            Some(c) => c,
-            None => {
-                return Err(format!(
-                    "`get_taxonomy_url` received an unknown taxonomy as kind: {}",
-                    kind
-                )
-                .into());
-            }
-        };
-        if let Some(permalink) = container.get(&name) {
-            return Ok(to_value(permalink).unwrap());
-        }
-
-        Err(format!("`get_taxonomy_url`: couldn't find `{}` in `{}` taxonomy", name, kind).into())
-    })
-}
-
-pub fn make_resize_image(imageproc: Arc<Mutex<imageproc::Processor>>) -> GlobalFn {
-    static DEFAULT_OP: &'static str = "fill";
-    const DEFAULT_Q: u8 = 75;
-
-    Box::new(move |args| -> Result<Value> {
+#[derive(Debug)]
+pub struct ResizeImage {
+    imageproc: Arc<Mutex<imageproc::Processor>>,
+}
+impl ResizeImage {
+    pub fn new(imageproc: Arc<Mutex<imageproc::Processor>>) -> Self {
+        Self { imageproc }
+    }
+}
+
+static DEFAULT_OP: &'static str = "fill";
+static DEFAULT_FMT: &'static str = "auto";
+const DEFAULT_Q: u8 = 75;
+
+impl TeraFn for ResizeImage {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
         let path = required_arg!(
             String,
             args.get("path"),
@@ -214,6 +115,11 @@ pub fn make_resize_image(imageproc: Arc<Mutex<imageproc::Processor>>) -> GlobalFn {
         );
         let op = optional_arg!(String, args.get("op"), "`resize_image`: `op` must be a string")
             .unwrap_or_else(|| DEFAULT_OP.to_string());

+        let format =
+            optional_arg!(String, args.get("format"), "`resize_image`: `format` must be a string")
+                .unwrap_or_else(|| DEFAULT_FMT.to_string());
+
         let quality =
             optional_arg!(u8, args.get("quality"), "`resize_image`: `quality` must be a number")
                 .unwrap_or(DEFAULT_Q);
@@ -221,26 +127,170 @@ pub fn make_resize_image(imageproc: Arc<Mutex<imageproc::Processor>>) -> GlobalFn {
             return Err("`resize_image`: `quality` must be in range 1-100".to_string().into());
         }

-        let mut imageproc = imageproc.lock().unwrap();
+        let mut imageproc = self.imageproc.lock().unwrap();
         if !imageproc.source_exists(&path) {
             return Err(format!("`resize_image`: Cannot find path: {}", path).into());
         }

-        let imageop = imageproc::ImageOp::from_args(path, &op, width, height, quality)
+        let imageop = imageproc::ImageOp::from_args(path, &op, width, height, &format, quality)
             .map_err(|e| format!("`resize_image`: {}", e))?;
         let url = imageproc.insert(imageop);

         to_value(url).map_err(|err| err.into())
-    })
+    }
 }
+#[derive(Debug)]
+pub struct GetTaxonomyUrl {
+    taxonomies: HashMap<String, HashMap<String, String>>,
+}
+impl GetTaxonomyUrl {
+    pub fn new(all_taxonomies: &[Taxonomy]) -> Self {
+        let mut taxonomies = HashMap::new();
+        for taxonomy in all_taxonomies {
+            let mut items = HashMap::new();
+            for item in &taxonomy.items {
+                items.insert(item.name.clone(), item.permalink.clone());
+            }
+            taxonomies.insert(taxonomy.kind.name.clone(), items);
+        }
+        Self { taxonomies }
+    }
+}
+impl TeraFn for GetTaxonomyUrl {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
+        let kind = required_arg!(
+            String,
+            args.get("kind"),
+            "`get_taxonomy_url` requires a `kind` argument with a string value"
+        );
+        let name = required_arg!(
+            String,
+            args.get("name"),
+            "`get_taxonomy_url` requires a `name` argument with a string value"
+        );
+        let container = match self.taxonomies.get(&kind) {
+            Some(c) => c,
+            None => {
+                return Err(format!(
+                    "`get_taxonomy_url` received an unknown taxonomy as kind: {}",
+                    kind
+                )
+                .into());
+            }
+        };
+        if let Some(permalink) = container.get(&name) {
+            return Ok(to_value(permalink).unwrap());
+        }
+
+        Err(format!("`get_taxonomy_url`: couldn't find `{}` in `{}` taxonomy", name, kind).into())
+    }
+}
+
+#[derive(Debug)]
+pub struct GetPage {
+    base_path: PathBuf,
+    library: Arc<RwLock<Library>>,
+}
+impl GetPage {
+    pub fn new(base_path: PathBuf, library: Arc<RwLock<Library>>) -> Self {
+        Self { base_path: base_path.join("content"), library }
+    }
+}
+impl TeraFn for GetPage {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
+        let path = required_arg!(
+            String,
+            args.get("path"),
+            "`get_page` requires a `path` argument with a string value"
+        );
+        let full_path = self.base_path.join(&path);
+        let library = self.library.read().unwrap();
+        match library.get_page(&full_path) {
+            Some(p) => Ok(to_value(p.to_serialized(&library)).unwrap()),
+            None => Err(format!("Page `{}` not found.", path).into()),
+        }
+    }
+}
+
+#[derive(Debug)]
+pub struct GetSection {
+    base_path: PathBuf,
+    library: Arc<RwLock<Library>>,
+}
+impl GetSection {
+    pub fn new(base_path: PathBuf, library: Arc<RwLock<Library>>) -> Self {
+        Self { base_path: base_path.join("content"), library }
+    }
+}
+impl TeraFn for GetSection {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
+        let path = required_arg!(
+            String,
+            args.get("path"),
+            "`get_section` requires a `path` argument with a string value"
+        );
+        let metadata_only = args
+            .get("metadata_only")
+            .map_or(false, |c| from_value::<bool>(c.clone()).unwrap_or(false));
+        let full_path = self.base_path.join(&path);
+        let library = self.library.read().unwrap();
+        match library.get_section(&full_path) {
+            Some(s) => {
+                if metadata_only {
+                    Ok(to_value(s.to_serialized_basic(&library)).unwrap())
+                } else {
+                    Ok(to_value(s.to_serialized(&library)).unwrap())
+                }
+            }
+            None => Err(format!("Section `{}` not found.", path).into()),
+        }
+    }
+}
+
+#[derive(Debug)]
+pub struct GetTaxonomy {
+    library: Arc<RwLock<Library>>,
+    taxonomies: HashMap<String, Taxonomy>,
+}
+impl GetTaxonomy {
+    pub fn new(all_taxonomies: Vec<Taxonomy>, library: Arc<RwLock<Library>>) -> Self {
+        let mut taxonomies = HashMap::new();
+        for taxo in all_taxonomies {
+            taxonomies.insert(taxo.kind.name.clone(), taxo);
+        }
+        Self { taxonomies, library }
+    }
+}
+impl TeraFn for GetTaxonomy {
+    fn call(&self, args: &HashMap<String, Value>) -> Result<Value> {
+        let kind = required_arg!(
+            String,
+            args.get("kind"),
+            "`get_taxonomy` requires a `kind` argument with a string value"
+        );
+        match self.taxonomies.get(&kind) {
+            Some(t) => Ok(to_value(t.to_serialized(&self.library.read().unwrap())).unwrap()),
+            None => {
+                Err(format!("`get_taxonomy` received an unknown taxonomy as kind: {}", kind).into())
+            }
+        }
+    }
+}
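The pattern running through this whole file is the same: each `make_*` factory that returned a boxed `GlobalFn` closure becomes a struct that owns its state and implements a `call` trait method (Tera's `Function` trait, aliased to `TeraFn` here). A self-contained sketch of that shape, using a stand-in trait and simplified string values rather than the real `tera` API:

```rust
use std::collections::HashMap;

// Stand-in for tera's `Function` trait (aliased to `TeraFn` above).
trait TeraFn {
    fn call(&self, args: &HashMap<String, String>) -> Result<String, String>;
}

// State that the old version captured in a closure now lives in a field,
// so the function type can also derive Debug, be named, and be tested.
struct GetUrl {
    permalinks: HashMap<String, String>,
}

impl TeraFn for GetUrl {
    fn call(&self, args: &HashMap<String, String>) -> Result<String, String> {
        let path = args.get("path").ok_or("`get_url` requires a `path` argument")?;
        self.permalinks
            .get(path)
            .cloned()
            .ok_or_else(|| format!("Could not resolve URL for link `{}`", path))
    }
}

fn main() {
    let mut permalinks = HashMap::new();
    permalinks.insert("about.md".to_string(), "https://example.com/about/".to_string());

    // A registry can hold the functions as trait objects, just like a template
    // engine would after registration.
    let functions: HashMap<&str, Box<dyn TeraFn>> =
        [("get_url", Box::new(GetUrl { permalinks }) as Box<dyn TeraFn>)]
            .into_iter()
            .collect();

    let mut args = HashMap::new();
    args.insert("path".to_string(), "about.md".to_string());
    let url = functions["get_url"].call(&args).unwrap();
    assert_eq!(url, "https://example.com/about/");
}
```

The real implementations take and return `tera::Value` and use the `required_arg!`/`optional_arg!` macros, but the ownership shape is the same; it is also what lets `GetPage`/`GetSection` share the library behind an `Arc<RwLock<...>>` instead of eagerly serializing every page at construction time.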
 #[cfg(test)]
 mod tests {
-    use super::{make_get_taxonomy, make_get_taxonomy_url, make_get_url, make_trans};
+    use super::{GetTaxonomy, GetTaxonomyUrl, GetUrl, Trans};

     use std::collections::HashMap;
+    use std::sync::{Arc, RwLock};

-    use tera::{to_value, Value};
+    use tera::{to_value, Function, Value};

     use config::{Config, Taxonomy as TaxonomyConfig};
     use library::{Library, Taxonomy, TaxonomyItem};
@@ -248,56 +298,67 @@ mod tests {
     #[test]
     fn can_add_cachebust_to_url() {
         let config = Config::default();
-        let static_fn = make_get_url(HashMap::new(), config);
+        let static_fn = GetUrl::new(config, HashMap::new());
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("app.css").unwrap());
         args.insert("cachebust".to_string(), to_value(true).unwrap());
-        assert_eq!(static_fn(args).unwrap(), "http://a-website.com/app.css?t=1");
+        assert_eq!(static_fn.call(&args).unwrap(), "http://a-website.com/app.css?t=1");
     }

     #[test]
     fn can_add_trailing_slashes() {
         let config = Config::default();
-        let static_fn = make_get_url(HashMap::new(), config);
+        let static_fn = GetUrl::new(config, HashMap::new());
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("app.css").unwrap());
         args.insert("trailing_slash".to_string(), to_value(true).unwrap());
-        assert_eq!(static_fn(args).unwrap(), "http://a-website.com/app.css/");
+        assert_eq!(static_fn.call(&args).unwrap(), "http://a-website.com/app.css/");
     }

     #[test]
     fn can_add_slashes_and_cachebust() {
         let config = Config::default();
-        let static_fn = make_get_url(HashMap::new(), config);
+        let static_fn = GetUrl::new(config, HashMap::new());
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("app.css").unwrap());
         args.insert("trailing_slash".to_string(), to_value(true).unwrap());
         args.insert("cachebust".to_string(), to_value(true).unwrap());
-        assert_eq!(static_fn(args).unwrap(), "http://a-website.com/app.css/?t=1");
+        assert_eq!(static_fn.call(&args).unwrap(), "http://a-website.com/app.css/?t=1");
     }

     #[test]
     fn can_link_to_some_static_file() {
         let config = Config::default();
-        let static_fn = make_get_url(HashMap::new(), config);
+        let static_fn = GetUrl::new(config, HashMap::new());
         let mut args = HashMap::new();
         args.insert("path".to_string(), to_value("app.css").unwrap());
-        assert_eq!(static_fn(args).unwrap(), "http://a-website.com/app.css");
+        assert_eq!(static_fn.call(&args).unwrap(), "http://a-website.com/app.css");
     }
     #[test]
     fn can_get_taxonomy() {
-        let taxo_config = TaxonomyConfig { name: "tags".to_string(), ..TaxonomyConfig::default() };
-        let library = Library::new(0, 0);
-        let tag = TaxonomyItem::new("Programming", "tags", &Config::default(), vec![], &library);
+        let config = Config::default();
+        let taxo_config = TaxonomyConfig {
+            name: "tags".to_string(),
+            lang: config.default_language.clone(),
+            ..TaxonomyConfig::default()
+        };
+        let library = Arc::new(RwLock::new(Library::new(0, 0, false)));
+        let tag = TaxonomyItem::new(
+            "Programming",
+            &taxo_config,
+            &config,
+            vec![],
+            &library.read().unwrap(),
+        );
         let tags = Taxonomy { kind: taxo_config, items: vec![tag] };
         let taxonomies = vec![tags.clone()];
-        let static_fn = make_get_taxonomy(&taxonomies, &library);
+        let static_fn = GetTaxonomy::new(taxonomies.clone(), library.clone());
         // can find it correctly
         let mut args = HashMap::new();
         args.insert("kind".to_string(), to_value("tags").unwrap());
-        let res = static_fn(args).unwrap();
+        let res = static_fn.call(&args).unwrap();
         let res_obj = res.as_object().unwrap();
         assert_eq!(res_obj["kind"], to_value(tags.kind).unwrap());
         assert_eq!(res_obj["items"].clone().as_array().unwrap().len(), 1);
@@ -321,31 +382,36 @@ mod tests {
         // and errors if it can't find it
         let mut args = HashMap::new();
         args.insert("kind".to_string(), to_value("something-else").unwrap());
-        assert!(static_fn(args).is_err());
+        assert!(static_fn.call(&args).is_err());
     }

     #[test]
     fn can_get_taxonomy_url() {
-        let taxo_config = TaxonomyConfig { name: "tags".to_string(), ..TaxonomyConfig::default() };
-        let library = Library::new(0, 0);
-        let tag = TaxonomyItem::new("Programming", "tags", &Config::default(), vec![], &library);
+        let config = Config::default();
+        let taxo_config = TaxonomyConfig {
+            name: "tags".to_string(),
+            lang: config.default_language.clone(),
+            ..TaxonomyConfig::default()
+        };
+        let library = Library::new(0, 0, false);
+        let tag = TaxonomyItem::new("Programming", &taxo_config, &config, vec![], &library);
         let tags = Taxonomy { kind: taxo_config, items: vec![tag] };
         let taxonomies = vec![tags.clone()];
-        let static_fn = make_get_taxonomy_url(&taxonomies);
+        let static_fn = GetTaxonomyUrl::new(&taxonomies);
         // can find it correctly
         let mut args = HashMap::new();
         args.insert("kind".to_string(), to_value("tags").unwrap());
         args.insert("name".to_string(), to_value("Programming").unwrap());
         assert_eq!(
-            static_fn(args).unwrap(),
+            static_fn.call(&args).unwrap(),
             to_value("http://a-website.com/tags/programming/").unwrap()
         );
         // and errors if it can't find it
         let mut args = HashMap::new();
         args.insert("kind".to_string(), to_value("tags").unwrap());
         args.insert("name".to_string(), to_value("random").unwrap());
-        assert!(static_fn(args).is_err());
+        assert!(static_fn.call(&args).is_err());
     }
     #[test]
@@ -364,16 +430,16 @@ title = "A title"
 "#;
         let config = Config::parse(trans_config).unwrap();
-        let static_fn = make_trans(config);
+        let static_fn = Trans::new(config);
         let mut args = HashMap::new();

         args.insert("key".to_string(), to_value("title").unwrap());
-        assert_eq!(static_fn(args.clone()).unwrap(), "Un titre");
+        assert_eq!(static_fn.call(&args).unwrap(), "Un titre");

         args.insert("lang".to_string(), to_value("en").unwrap());
-        assert_eq!(static_fn(args.clone()).unwrap(), "A title");
+        assert_eq!(static_fn.call(&args).unwrap(), "A title");

         args.insert("lang".to_string(), to_value("fr").unwrap());
-        assert_eq!(static_fn(args.clone()).unwrap(), "Un titre");
+        assert_eq!(static_fn.call(&args).unwrap(), "Un titre");
     }
 }


@@ -25,21 +25,34 @@ pub mod global_fns;
 use tera::{Context, Tera};

-use errors::{Result, ResultExt};
+use errors::{Error, Result};

 lazy_static! {
     pub static ref ZOLA_TERA: Tera = {
         let mut tera = Tera::default();
         tera.add_raw_templates(vec![
-            ("404.html", include_str!("builtins/404.html")),
-            ("rss.xml", include_str!("builtins/rss.xml")),
-            ("sitemap.xml", include_str!("builtins/sitemap.xml")),
-            ("robots.txt", include_str!("builtins/robots.txt")),
-            ("anchor-link.html", include_str!("builtins/anchor-link.html")),
-            ("shortcodes/youtube.html", include_str!("builtins/shortcodes/youtube.html")),
-            ("shortcodes/vimeo.html", include_str!("builtins/shortcodes/vimeo.html")),
-            ("shortcodes/gist.html", include_str!("builtins/shortcodes/gist.html")),
-            ("shortcodes/streamable.html", include_str!("builtins/shortcodes/streamable.html")),
+            ("__zola_builtins/404.html", include_str!("builtins/404.html")),
+            ("__zola_builtins/rss.xml", include_str!("builtins/rss.xml")),
+            ("__zola_builtins/sitemap.xml", include_str!("builtins/sitemap.xml")),
+            ("__zola_builtins/robots.txt", include_str!("builtins/robots.txt")),
+            (
+                "__zola_builtins/split_sitemap_index.xml",
+                include_str!("builtins/split_sitemap_index.xml"),
+            ),
+            ("__zola_builtins/anchor-link.html", include_str!("builtins/anchor-link.html")),
+            (
+                "__zola_builtins/shortcodes/youtube.html",
+                include_str!("builtins/shortcodes/youtube.html"),
+            ),
+            (
+                "__zola_builtins/shortcodes/vimeo.html",
+                include_str!("builtins/shortcodes/vimeo.html"),
+            ),
+            ("__zola_builtins/shortcodes/gist.html", include_str!("builtins/shortcodes/gist.html")),
+            (
+                "__zola_builtins/shortcodes/streamable.html",
+                include_str!("builtins/shortcodes/streamable.html"),
+            ),
             ("internal/alias.html", include_str!("builtins/internal/alias.html")),
         ])
         .unwrap();
@@ -56,6 +69,6 @@ pub fn render_redirect_template(url: &str, tera: &Tera) -> Result<String> {
     let mut context = Context::new();
     context.insert("url", &url);

-    tera.render("internal/alias.html", &context)
-        .chain_err(|| format!("Failed to render alias for '{}'", url))
+    tera.render("internal/alias.html", context)
+        .map_err(|e| Error::chain(format!("Failed to render alias for '{}'", url), e))
 }


@@ -5,7 +5,7 @@ authors = ["Vincent Prouillet <prouillet.vincent@gmail.com>"]
 [dependencies]
 errors = { path = "../errors" }
-tera = "0.11"
+tera = "1.0.0-alpha.3"
 unicode-segmentation = "1.2"
 walkdir = "2"
 toml = "0.4"


@@ -4,7 +4,7 @@ use std::path::{Path, PathBuf};
 use std::time::SystemTime;
 use walkdir::WalkDir;

-use errors::{Result, ResultExt};
+use errors::{Error, Result};

 pub fn is_path_in_directory(parent: &Path, path: &Path) -> Result<bool> {
     let canonical_path = path
@@ -19,7 +19,8 @@ pub fn is_path_in_directory(parent: &Path, path: &Path) -> Result<bool> {
 /// Create a file with the content given
 pub fn create_file(path: &Path, content: &str) -> Result<()> {
-    let mut file = File::create(&path)?;
+    let mut file =
+        File::create(&path).map_err(|e| Error::chain(format!("Failed to create {:?}", path), e))?;
     file.write_all(content.as_bytes())?;
     Ok(())
 }
@@ -36,8 +37,9 @@ pub fn ensure_directory_exists(path: &Path) -> Result<()> {
 /// exists before creating it
 pub fn create_directory(path: &Path) -> Result<()> {
     if !path.exists() {
-        create_dir_all(path)
-            .chain_err(|| format!("Was not able to create folder {}", path.display()))?;
+        create_dir_all(path).map_err(|e| {
+            Error::chain(format!("Was not able to create folder {}", path.display()), e)
+        })?;
     }
     Ok(())
 }
@@ -46,7 +48,7 @@ pub fn create_directory(path: &Path) -> Result<()> {
 pub fn read_file(path: &Path) -> Result<String> {
     let mut content = String::new();
     File::open(path)
-        .chain_err(|| format!("Failed to open '{:?}'", path.display()))?
+        .map_err(|e| Error::chain(format!("Failed to open '{:?}'", path.display()), e))?
         .read_to_string(&mut content)?;

     // Remove utf-8 BOM if any.
@@ -57,6 +59,19 @@ pub fn read_file(path: &Path) -> Result<String> {
     Ok(content)
 }

+/// Return the content of a file, with error handling added.
+/// The default error message is overwritten by the message given.
+/// That means it is allocating 2 strings, oh well
+pub fn read_file_with_error(path: &Path, message: &str) -> Result<String> {
+    let res = read_file(&path);
+    if res.is_ok() {
+        return res;
+    }
+    let mut err = Error::msg(message);
+    err.source = res.unwrap_err().source;
+    Err(err)
+}
+
 /// Looks into the current folder for the path and see if there's anything that is not a .md
 /// file. Those will be copied next to the rendered .html file
 pub fn find_related_assets(path: &Path) -> Vec<PathBuf> {


@@ -14,3 +14,4 @@ pub mod fs;
 pub mod net;
 pub mod site;
 pub mod templates;
+pub mod vec;


@@ -11,7 +11,7 @@ macro_rules! render_default_tpl {
         let mut context = Context::new();
         context.insert("filename", $filename);
         context.insert("url", $url);
-        Tera::one_off(DEFAULT_TPL, &context, true).map_err(|e| e.into())
+        Tera::one_off(DEFAULT_TPL, context, true).map_err(|e| e.into())
     }};
 }
@@ -22,15 +22,26 @@ macro_rules! render_default_tpl {
 pub fn render_template(
     name: &str,
     tera: &Tera,
-    context: &Context,
+    context: Context,
     theme: &Option<String>,
 ) -> Result<String> {
+    // check if it is in the templates
     if tera.templates.contains_key(name) {
         return tera.render(name, context).map_err(|e| e.into());
     }

+    // check if it is part of a theme
     if let Some(ref t) = *theme {
-        return tera.render(&format!("{}/templates/{}", t, name), context).map_err(|e| e.into());
+        let theme_template_name = format!("{}/templates/{}", t, name);
+        if tera.templates.contains_key(&theme_template_name) {
+            return tera.render(&theme_template_name, context).map_err(|e| e.into());
+        }
+    }
+
+    // check if it is part of ZOLA_TERA defaults
+    let default_name = format!("__zola_builtins/{}", name);
+    if tera.templates.contains_key(&default_name) {
+        return tera.render(&default_name, context).map_err(|e| e.into());
     }

     // maybe it's a default one?


@@ -0,0 +1,44 @@
+pub trait InsertMany {
+    type Element;
+    fn insert_many(&mut self, elem_to_insert: Vec<(usize, Self::Element)>);
+}
+
+impl<T> InsertMany for Vec<T> {
+    type Element = T;
+
+    /// Efficiently insert multiple elements at their specified indices.
+    /// The elements should be sorted in ascending order by their index.
+    ///
+    /// This is done in O(n) time.
+    fn insert_many(&mut self, elem_to_insert: Vec<(usize, T)>) {
+        let mut inserted = vec![];
+        let mut last_idx = 0;
+
+        for (idx, elem) in elem_to_insert.into_iter() {
+            let head_len = idx - last_idx;
+            inserted.extend(self.splice(0..head_len, std::iter::empty()));
+            inserted.push(elem);
+            last_idx = idx;
+        }
+        let len = self.len();
+        inserted.extend(self.drain(0..len));
+
+        *self = inserted;
+    }
+}
+
+#[cfg(test)]
+mod test {
+    use super::InsertMany;
+
+    #[test]
+    fn insert_many_works() {
+        let mut v = vec![1, 2, 3, 4, 5];
+        v.insert_many(vec![(0, 0), (2, -1), (5, 6)]);
+        assert_eq!(v, &[0, 1, 2, -1, 3, 4, 5, 6]);
+
+        let mut v2 = vec![1, 2, 3, 4, 5];
+        v2.insert_many(vec![(0, 0), (2, -1)]);
+        assert_eq!(v2, &[0, 1, 2, -1, 3, 4, 5]);
+    }
+}


@@ -0,0 +1 @@
+.hello {}


@@ -0,0 +1,4 @@
+Number,Title
+1,Gutenberg
+2,Printing
+3,Typewriter,ExtraBadColumn

[Binary image files not shown: three new images added (120 KiB, 324 KiB, 357 KiB); five existing images modified with sizes unchanged (47 KiB, 192 KiB, 204 KiB, 42 KiB, 250 KiB).]

@@ -16,10 +16,22 @@ resize_image(path, width, height, op, quality)
 - `path`: The path to the source image relative to the `content` directory in the [directory structure](./documentation/getting-started/directory-structure.md).
 - `width` and `height`: The dimensions in pixels of the resized image. Usage depends on the `op` argument.
-- `op`: Resize operation. This can be one of five choices: `"scale"`, `"fit_width"`, `"fit_height"`, `"fit"`, or `"fill"`.
-  What each of these does is explained below.
-  This argument is optional, default value is `"fill"`.
-- `quality`: JPEG quality of the resized image, in percents. Optional argument, default value is `75`.
+- `op` (_optional_): Resize operation. This can be one of:
+  - `"scale"`
+  - `"fit_width"`
+  - `"fit_height"`
+  - `"fit"`
+  - `"fill"`
+
+  What each of these does is explained below. The default is `"fill"`.
+- `format` (_optional_): Encoding format of the resized image. May be one of:
+  - `"auto"`
+  - `"jpg"`
+  - `"png"`
+
+  The default is `"auto"`; this means the format is chosen based on the input image format.
+  JPEG is chosen for JPEGs and other lossy formats, while PNG is chosen for PNGs and other lossless formats.
+- `quality` (_optional_): JPEG quality of the resized image, in percents. Only used when encoding JPEGs, default value is `75`.

 ### Image processing and return value
@@ -29,7 +41,7 @@ Zola performs image processing during the build process and places the resized i
 static/processed_images/
 ```

-Resized images are JPEGs. Filename of each resized image is a hash of the function arguments,
+Filename of each resized image is a hash of the function arguments,
 which means that once an image is resized in a certain way, it will be stored in the above directory and will not
 need to be resized again during subsequent builds (unless the image itself, the dimensions, or other arguments are changed).
 Therefore, if you have a large number of images, they will only need to be resized once.
@@ -40,14 +52,14 @@ The function returns a full URL to the resized image.
 The source for all examples is this 300 × 380 pixels image:

-![gutenberg](gutenberg.jpg)
+![zola](01-zola.png)

 ### **`"scale"`**
 Simply scales the image to the specified dimensions (`width` & `height`) irrespective of the aspect ratio.

 `resize_image(..., width=150, height=150, op="scale")`

-{{ resize_image(path="documentation/content/image-processing/gutenberg.jpg", width=150, height=150, op="scale") }}
+{{ resize_image(path="documentation/content/image-processing/01-zola.png", width=150, height=150, op="scale") }}

 ### **`"fit_width"`**
 Resizes the image such that the resulting width is `width` and height is whatever will preserve the aspect ratio.
@@ -55,7 +67,7 @@ The source for all examples is this 300 × 380 pixels image:
 `resize_image(..., width=100, op="fit_width")`

-{{ resize_image(path="documentation/content/image-processing/gutenberg.jpg", width=100, height=0, op="fit_width") }}
+{{ resize_image(path="documentation/content/image-processing/01-zola.png", width=100, height=0, op="fit_width") }}

 ### **`"fit_height"`**
 Resizes the image such that the resulting height is `height` and width is whatever will preserve the aspect ratio.
@@ -63,7 +75,7 @@ The source for all examples is this 300 × 380 pixels image:
 `resize_image(..., height=150, op="fit_height")`

-{{ resize_image(path="documentation/content/image-processing/gutenberg.jpg", width=0, height=150, op="fit_height") }}
+{{ resize_image(path="documentation/content/image-processing/01-zola.png", width=0, height=150, op="fit_height") }}

 ### **`"fit"`**
 Like `"fit_width"` and `"fit_height"` combined.
@@ -72,7 +84,7 @@ The source for all examples is this 300 × 380 pixels image:
 `resize_image(..., width=150, height=150, op="fit")`

-{{ resize_image(path="documentation/content/image-processing/gutenberg.jpg", width=150, height=150, op="fit") }}
+{{ resize_image(path="documentation/content/image-processing/01-zola.png", width=150, height=150, op="fit") }}

 ### **`"fill"`**
 This is the default operation. It takes the image's center part with the same aspect ratio as the `width` & `height` given and resizes that
@@ -80,7 +92,7 @@ The source for all examples is this 300 × 380 pixels image:
 `resize_image(..., width=150, height=150, op="fill")`

-{{ resize_image(path="documentation/content/image-processing/gutenberg.jpg", width=150, height=150, op="fill") }}
+{{ resize_image(path="documentation/content/image-processing/01-zola.png", width=150, height=150, op="fill") }}

 ## Using `resize_image` in markdown via shortcodes
@@ -96,11 +108,11 @@ The examples above were generated using a shortcode file named `resize_image.htm
 ## Creating picture galleries

-The `resize_image()` can be used multiple times and/or in loops as it is designed to handle this efficiently.
+The `resize_image()` can be used multiple times and/or in loops. It is designed to handle this efficiently.

 This can be used along with `assets` [page metadata](./documentation/templates/pages-sections.md) to create picture galleries.
 The `assets` variable holds paths to all assets in the directory of a page with resources
-(see [Assets colocation](./documentation/content/overview.md#assets-colocation)): if you have files other than images you
+(see [assets colocation](./documentation/content/overview.md#assets-colocation)): if you have files other than images you
 will need to filter them out in the loop first like in the example below.

 This can be used in shortcodes. For example, we can create a very simple html-only clickable
@@ -108,7 +120,7 @@ picture gallery with the following shortcode named `gallery.html`:
 ```jinja2
 {% for asset in page.assets %}
-  {% if asset is ending_with(".jpg") %}
+  {% if asset is matching("[.](jpg|png)$") %}
     <a href="{{ get_url(path=asset) }}">
       <img src="{{ resize_image(path=asset, width=240, height=180, op="fill") }}" />
     </a>
@@ -117,7 +129,8 @@ picture gallery with the following shortcode named `gallery.html`:
 {% endfor %}
 ```

-As you can notice, we didn't specify an `op` argument, which means it'll default to `"fill"`. Similarly, the JPEG quality will default to `75`.
+As you can notice, we didn't specify an `op` argument, which means it'll default to `"fill"`. Similarly, the format will default to
+`"auto"` (choosing PNG or JPEG as appropriate) and the JPEG quality will default to `75`.

 To call it from a markdown file, simply do:
@@ -130,5 +143,5 @@ Here is the result:
 {{ gallery() }}

 <small>
-  Image attribution: example-01: Willi Heidelbach, example-02: Daniel Ullrich, others: public domain.
+  Image attribution: Public domain, except: _06-example.jpg_: Willi Heidelbach, _07-example.jpg_: Daniel Ullrich.
 </small>


@@ -0,0 +1,37 @@
++++
+title = "Multilingual sites"
+weight = 130
++++
+
+Zola supports having a site in multiple languages.
+
+## Configuration
+
+To get started, you will need to add the languages you want to support
+to your `config.toml`. For example:
+
+```toml
+languages = [
+    {code = "fr", rss = true}, # there will be a RSS feed for French content
+    {code = "it"}, # there won't be a RSS feed for Italian content
+]
+```
+
+If you want to use per-language taxonomies, ensure you set the `lang` field in their
+configuration.
+
+## Content
+
+Once the languages are added in, you can start to translate your content. Zola
+uses the filename to detect the language:
+
+- `content/an-article.md`: this will be the default language
+- `content/an-article.fr.md`: this will be in French
+
+If the language code in the filename does not correspond to one of the languages configured,
+an error will be shown.
+
+If your default language has an `_index.md` in a directory, you will need to add a `_index.{code}.md`
+file with the desired front-matter options as there is no language fallback.
+
+## Output
+
+Zola outputs the translated content with a base URL of `{base_url}/{code}/`.
+The only exception to that is if you are setting a translated page `path` directly in the front-matter.
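As a sketch of how the multilingual configuration described in this new page might combine with a per-language taxonomy, a hypothetical `config.toml` fragment (values are illustrative, not from the commit) could look like:

```toml
# hypothetical config.toml fragment
default_language = "en"

languages = [
    {code = "fr", rss = true},  # RSS feed generated for French content
]

taxonomies = [
    {name = "tags"},               # taxonomy for the default language
    {name = "tags", lang = "fr"},  # same taxonomy, scoped to French content
]
```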


@@ -102,6 +102,6 @@ where you want the summary to end and the content up to that point will be also
 available separately in the
 [template](./documentation/templates/pages-sections.md#page-variables).

-An anchor link to this position named `continue-reading` is created so you can link
-directly to it if needed, for example:
+An anchor link to this position named `continue-reading` is created, wrapped in a paragraph
+with a `zola-continue-reading` id, so you can link directly to it if needed, for example:
 `<a href="{{ page.permalink }}#continue-reading">Continue Reading</a>`


@@ -36,6 +36,10 @@ That's it, Zola will now recognise this template as a shortcode named `youtube`
 The markdown renderer will wrap an inline HTML node like `<a>` or `<span>` into a paragraph. If you want to disable that,
 simply wrap your shortcode in a `div`.

+Shortcodes are rendered before the markdown is parsed, so they don't have access to the table of contents. Because of that,
+you also cannot use the `get_page`/`get_section`/`get_taxonomy` global functions. They might work while running `zola serve` because
+the content has already been loaded, but they will fail during `zola build`.
+
 ## Using shortcodes

 There are two kinds of shortcodes:


@@ -105,6 +105,7 @@ Here is a full list of the supported languages and the short names you can use:
 - Textile -> ["textile"]
 - XML -> ["xml", "xsd", "xslt", "tld", "dtml", "rss", "opml", "svg"]
 - YAML -> ["yaml", "yml", "sublime-syntax"]
+- PowerShell -> ["ps1", "psm1", "psd1"]
 - SWI-Prolog -> ["pro"]
 - Reason -> ["re", "rei"]
 - CMake C Header -> ["h.in"]


@@ -7,13 +7,14 @@ Zola has built-in support for taxonomies.
 The first step is to define the taxonomies in your [config.toml](./documentation/getting-started/configuration.md).

-A taxonomy has 4 variables:
+A taxonomy has 5 variables:

 - `name`: a required string that will be used in the URLs, usually the plural version (i.e. tags, categories etc)
 - `paginate_by`: if this is set to a number, each term page will be paginated by this much.
 - `paginate_path`: if set, will be the path used by the paginated page; the page number will be appended after it.
   For example the default would be page/1
 - `rss`: if set to `true`, a RSS feed will be generated for each individual term.
+- `lang`: only set this if you are making a multilingual site and want to indicate which language this taxonomy is for
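A hypothetical taxonomy definition using the variables listed above (names and values are illustrative, not from the commit) might look like:

```toml
# hypothetical config.toml fragment
taxonomies = [
    {name = "categories", rss = true},
    {name = "tags", paginate_by = 5, paginate_path = "page"},
]
```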
 Once this is done, you can then set taxonomies in your content and Zola will pick
 them up:


@@ -7,6 +7,13 @@ By default, GitHub Pages uses Jekyll (A ruby based static site generator),
 but you can use whatever you want provided you have an `index.html` file in the root of a branch called `gh-pages`.
 That branch name can also be manually changed in the settings of a repository.

+We can use any CI server to build and deploy our site. For example:
+
+* [Github Actions](https://github.com/shalzz/zola-deploy-action)
+* [Travis CI](#travis-ci)
+
+## Travis CI
+
 We are going to use [TravisCI](https://travis-ci.org) to automatically publish the site. If you are not using Travis already,
 you will need to login with the GitHub OAuth and activate Travis for the repository.
 Don't forget to also check if your repository allows GitHub Pages in its settings.


@@ -21,7 +21,7 @@ zola.
 ## build

-This will build the whole site in the `public` directory.
+This will build the whole site in the `public` directory after deleting it.

 ```bash
 $ zola build
@@ -36,6 +36,14 @@ $ zola build --base-url $DEPLOY_URL
 This is useful for example when you want to deploy previews of a site to a dynamic URL, such as Netlify
 deploy previews.

+You can override the default `base_path` by passing a new directory to the `base-path` flag. If no `base-path` flag
+is provided, zola defaults to your current working directory. This is useful if your zola project is located in
+a different directory from where you're executing zola from.
+
+```bash
+$ zola build --base-path /path/to/zola/site
+```
+
 You can override the default output directory 'public' by passing another value to the `output-dir` flag.

 ```bash
@@ -58,6 +66,8 @@ if you are running zola in a Docker container.

 In the event you don't want zola to run a local webserver, you can use the `--watch-only` flag.

+Before starting, it will delete the public directory to ensure it starts from a clean slate.
+
 ```bash
 $ zola serve
 $ zola serve --port 2000
@@ -65,6 +75,7 @@ $ zola serve --interface 0.0.0.0
 $ zola serve --interface 0.0.0.0 --port 2000
 $ zola serve --interface 0.0.0.0 --base-url 127.0.0.1
 $ zola serve --interface 0.0.0.0 --port 2000 --output-dir www/public
+$ zola serve --interface 0.0.0.0 --port 2000 --base-path mysite/ --output-dir mysite/www/public
 $ zola serve --watch-only
 ```


@@ -21,7 +21,7 @@ base_url = "mywebsite.com"
 # Used in RSS by default
 title = ""
 description = ""

-# the default language, used in RSS and coming i18n
+# The default language, used in RSS
 default_language = "en"

 # Theme name to use
@@ -51,6 +51,15 @@ generate_rss = false
 #
 taxonomies = []

+# The additional languages for that site
+# Example:
+# languages = [
+#    {code = "fr", rss = true}, # there will be a RSS feed for French content
+#    {code = "it"}, # there won't be a RSS feed for Italian content
+# ]
+#
+languages = []
+
 # Whether to compile the Sass files found in the `sass` directory
 compile_sass = false
@@ -99,6 +108,7 @@ Zola currently has the following highlight themes available:
 - [classic-modified](https://tmtheme-editor.herokuapp.com/#!/editor/theme/Classic%20Modified)
 - [demain](https://tmtheme-editor.herokuapp.com/#!/editor/theme/Demain)
 - [dimmed-fluid](https://tmtheme-editor.herokuapp.com/#!/editor/theme/Dimmed%20Fluid)
+- [dracula](https://draculatheme.com/)
 - [gray-matter-dark](https://tmtheme-editor.herokuapp.com/#!/editor/theme/Gray%20Matter%20Dark)
 - [gruvbox-dark](https://github.com/morhetz/gruvbox)
 - [gruvbox-light](https://github.com/morhetz/gruvbox)


@@ -45,7 +45,7 @@ $ choco install zola
 ```

 ## From source
-To build it from source, you will need to have Git, [Rust (at least 1.30) and Cargo](https://www.rust-lang.org/)
+To build it from source, you will need to have Git, [Rust (at least 1.31) and Cargo](https://www.rust-lang.org/)
 installed. You will also need additional dependencies to compile [libsass](https://github.com/sass/libsass):

 - OSX, Linux and other Unix: `make` (`gmake` on BSDs), `g++`, `libssl-dev`


@@ -18,6 +18,7 @@ A few variables are available on all templates minus RSS and sitemap:
 - `config`: the [configuration](./documentation/getting-started/configuration.md) without any modifications
 - `current_path`: the path (full URL without the `base_url`) of the current page, never starting with a `/`
 - `current_url`: the full URL for that page
+- `lang`: the language for that page, `null` if the page/section doesn't have a language set
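A hypothetical template snippet sketching how the new `lang` variable added above might be used (the URLs and wording are illustrative, not from the commit):

```jinja2
{# hypothetical: point readers at the other language's version #}
{% if lang == "fr" %}
  <a href="/">English version</a>
{% elif not lang %}
  <a href="/fr/">Version française</a>
{% endif %}
```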
## Standard Templates ## Standard Templates
By default, Zola will look for three templates: `index.html`, which is applied By default, Zola will look for three templates: `index.html`, which is applied
@ -146,33 +147,36 @@ Gets the whole taxonomy of a specific kind.
### `load_data` ### `load_data`
Loads data from a file or URL. Supported file types include *toml*, *json* and *csv*. Loads data from a file or URL. Supported file types include *toml*, *json* and *csv*.
Any other file type will be loaded as plain text.
The `path` argument specifies the path to the data file relative to your content directory. The `path` argument specifies the path to the data file relative to your base directory, where your `config.toml` is.
As a security precaution, If this file is outside of the main site directory, your site will fail to build. As a security precaution, If this file is outside of the main site directory, your site will fail to build.
```jinja2 ```jinja2
{% set data = load_data(path="blog/story/data.toml") %} {% set data = load_data(path="content/blog/story/data.toml") %}
``` ```
The optional `format` argument allows you to specify and override which data type is contained
within the file specified in the `path` argument. Valid entries are `toml`, `json`, `csv`
or `plain`. If the `format` argument isn't specified, then the path's extension is used.
```jinja2
{% set data = load_data(path="content/blog/story/data.txt", format="json") %}
```
Use the `plain` format when your file has a toml/json/csv extension but you want to load it as plain text.
For *toml* and *json* the data is loaded into a structure matching the original data file;
however, for *csv* there is no native notion of such a structure. Instead the data is separated
into a data structure containing *headers* and *records*. See the example below to see
how this works.
In the template:

```jinja2
{% set data = load_data(path="content/blog/story/data.csv") %}
```

In the *content/blog/story/data.csv* file:
```csv
Number, Title
1,Gutenberg
```
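The parsed CSV can then be rendered by iterating over the `headers` and `records` fields described above. A minimal sketch (the HTML markup is illustrative):

```jinja2
{# Render the parsed CSV structure as an HTML table #}
<table>
  <thead>
    <tr>{% for header in data.headers %}<th>{{ header }}</th>{% endfor %}</tr>
  </thead>
  <tbody>
    {% for record in data.records %}
    <tr>{% for cell in record %}<td>{{ cell }}</td>{% endfor %}</tr>
    {% endfor %}
  </tbody>
</table>
```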
#### Data Caching
Data file loading and remote requests are cached in memory during build, so multiple requests aren't made to the same endpoint.
URLs are cached based on the URL, and data files are cached based on the file's modified time.
The format is also taken into account when caching, so a request will be sent twice if it's loaded with 2 different formats.
### `trans`

Gets the translation of the given `key`, for the `default_language` or the `lang`uage given
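A minimal usage sketch, assuming a `title` key is defined in the `translations` tables of `config.toml`:

```jinja2
{# Uses the default_language of the site #}
{{ trans(key="title") }}
{# Explicitly requests the French translation #}
{{ trans(key="title", lang="fr") }}
```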
## Page variables

```ts
word_count: Number;
// Based on https://help.medium.com/hc/en-us/articles/214991667-Read-time
reading_time: Number;
// `earlier` and `later` are only populated if the section variable `sort_by` is set to `date`
// and only set when rendering the page itself
earlier: Page?;
later: Page?;
// `heavier` and `lighter` are only populated if the section variable `sort_by` is set to `weight`
// and only set when rendering the page itself
heavier: Page?;
lighter: Page?;
// Year/month/day is only set if the page has a date and month/day are 1-indexed
year: Number?;
month: Number?;
day: Number?;
// Paths of colocated assets, relative to the content directory
assets: Array<String>;
ancestors: Array<String>;
// The relative path from the `content` directory to the markdown file
relative_path: String;
// The language for the page if there is one. Defaults to the config `default_language`
lang: String;
// Information about all the available languages for that content
translations: Array<TranslatedContent>;
```
## Section variables
A section is rendered with the following fields:

```ts
content: String;
title: String?;
description: String?;
path: String;
// the path, split on '/'
components: Array<String>;
subsections: Array<String>;
word_count: Number;
// Based on https://help.medium.com/hc/en-us/articles/214991667-Read-time
reading_time: Number;
// Paths of colocated assets, relative to the content directory
assets: Array<String>;
// The relative paths of the parent sections until the index one, for use with the `get_section` Tera function
ancestors: Array<String>;
// The relative path from the `content` directory to the markdown file
relative_path: String;
// The language for the section if there is one. Defaults to the config `default_language`
lang: String;
// Information about all the available languages for that content
translations: Array<TranslatedContent>;
```
## Table of contents

Both page and section templates have a `toc` variable which corresponds to an array of `Header`.
A `Header` has the following fields:

```ts
permalink: String;
// All lower level headers below this header
children: Array<Header>;
```
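A sketch of rendering the table of contents from this structure (one level of nesting shown; deeper levels would repeat the inner loop or use a macro):

```jinja2
<ul>
{% for header in toc %}
  <li>
    <a href="{{ header.permalink | safe }}">{{ header.title }}</a>
    {% if header.children %}
    <ul>
      {% for child in header.children %}
      <li><a href="{{ child.permalink | safe }}">{{ child.title }}</a></li>
      {% endfor %}
    </ul>
    {% endif %}
  </li>
{% endfor %}
</ul>
```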
## Translated content
Both page and section have a `translations` field which corresponds to an array of `TranslatedContent`. If your site is not using multiple languages,
this will always be an empty array.
A `TranslatedContent` has the following fields:
```ts
// The language code for that content, empty if it is the default language
lang: String?;
// The title of that content if there is one
title: String?;
// A permalink to that content
permalink: String;
```
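For example, a minimal language-switcher sketch built from this array (the surrounding markup is illustrative):

```jinja2
{# Link to every available translation of the current page #}
{% for translation in page.translations %}
  <a href="{{ translation.permalink | safe }}">{{ translation.title }}</a>
{% endfor %}
```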
Zola will look for a `sitemap.xml` file in the `templates` directory or
use the built-in one.
If your site has more than 30 000 pages, it will automatically split
the links into multiple sitemaps as recommended by [Google](https://support.google.com/webmasters/answer/183668?hl=en):
> All formats limit a single sitemap to 50MB (uncompressed) and 50,000 URLs.
> If you have a larger file or more URLs, you will have to break your list into multiple sitemaps.
> You can optionally create a sitemap index file (a file that points to a list of sitemaps) and submit that single index file to Google.
In such a case, Zola will use a template called `split_sitemap_index.xml` to render the index sitemap.
The `sitemap.xml` template gets a single variable:
- `entries`: all pages of the site, as a list of `SitemapEntry`
A `SitemapEntry` has the following fields:
```ts
permalink: String;
date: String?;
extra: Hashmap<String, Any>?;
```
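As a sketch, a custom `sitemap.xml` template using the `entries` variable could look like this (the XML structure follows the standard sitemap protocol; the built-in template is the authoritative version):

```jinja2
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
{% for entry in entries %}
  <url>
    <loc>{{ entry.permalink | safe }}</loc>
    {% if entry.date %}<lastmod>{{ entry.date }}</lastmod>{% endif %}
  </url>
{% endfor %}
</urlset>
```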
The `split_sitemap_index.xml` also gets a single variable:
- `sitemaps`: a list of permalinks to the sitemaps
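A custom `split_sitemap_index.xml` sketch built from the `sitemaps` variable (again, the standard sitemap-index structure; consult the built-in template for the exact output):

```jinja2
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
{% for sitemap in sitemaps %}
  <sitemap>
    <loc>{{ sitemap | safe }}</loc>
  </sitemap>
{% endfor %}
</sitemapindex>
```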