Digital Histology & the Long Haul

Digital Histology homescreen screenshot.

Normally we finish our projects in anywhere from a few hours to a few weeks. Digital Histology has been the exception to that rule. I can see a reference to the site going back to Nov. of 2016! That doesn’t mean we’ve worked on this site continuously for years. The gaps have been frequent and long. OER grants have been written and won. Presentations have been made. Work has ebbed and flowed as the massive amount of content has been entered. There are more than 1500 pages1 and over 5GB of images. It’s a large site. A ton of work has gone into its construction, new goals have developed, and just about all of it is a little strange.2 I figured I’d better document some of this before I forgot all of it.

Made with Macromedia logo.

The History

I don’t recall all the details but essentially long ago in a Macromedia Authorware galaxy far, far away a digital histology program was constructed. Time passed. Acorns grew into trees. WINE was now required to launch the digital histology program. The screen was a tiny 752×613. It only ran on desktops. Updating it was nearly impossible. Things were not good. After much wandering we found one another and endeavored to put this work online for the betterment of humankind.

Having a previous project did some good things for us — the content was mostly created and there was experience working with digital projects. The previous construction patterns in Macromedia were very different from the way building on the web works today. We did quite a bit of work to parallel the previous interactions. I don’t know if that’s how I would have done it had we started from scratch. This was also the first time I’d built anything substantial with ACF.

The stacked tiering of the histology menu.

The Menu/Main Page

The menu has gone through a few iterations as we came up with different ways to deal with just how many pages were involved and with a really odd linking pattern. I’ll try to draw the menu pattern below. We had to figure out which pages had no grandchildren, and for each of those we would keep the title but link directly to the first child. Pages with no children would not be shown at all. Not super weird I guess but not normal.

Histology menu pattern - parent to child to terminal child

To deal with the scale, Jeff wrote a slick little plugin to dump the pages data into JSON. That saves us a lot of time, especially given the way that WordPress recursively builds menus. Jeff also had the layout generated in Vue.

/*
Plugin Name: Menu Cache Plugin
Version: 1.0.2
Author: Jeff Everhart
Author URI: http://altlab.vcu.edu/team-members/jeff-everhart/
License: GPL version 2 or later - http://www.gnu.org/licenses/old-licenses/gpl-2.0.html
Description: This is a helper plugin for developing complex menus. On post save, we cache all of the data for pages in a JSON file to be used on the front end.
*/

function create_menu($post_id) {
    global $wpdb;
    //grab the basics for every published page; $post_id comes from the save_post hook but we just rebuild the whole cache each time
    $results = $wpdb->get_results( "SELECT `ID`, `post_title`, `post_parent`, `post_name`, `guid` FROM {$wpdb->prefix}posts WHERE post_type='page' AND post_status='publish'", ARRAY_A );
    file_put_contents(plugin_dir_path(__FILE__) . 'results.json', json_encode($results));
}
add_action('save_post', 'create_menu');

Then something happened3 that required me to deal with a chunk of issues. I failed to do it in Vue enough times that I got mad and built it again in jQuery.4 You can see Jeff’s pretty code with consts and lets below.

function hasAnyGrandchildren (tree){
    let newTree = []
    let length = tree.length

    for (let i = 0; i < length; i++) {
        const node = tree[i]
        let hasGrandchildren = false
        if (node.children){
            let children = this.hasAnyGrandchildren(node.children)
            children.forEach(child => {
                if (child.children && child.children.length > 0) {
                    hasGrandchildren = true
                }
            })
        }
        node.hasGrandchildren = hasGrandchildren
        newTree.push(node)
    }
    return newTree
}

function createTree () {
    fetch( histology_directory.data_directory + '/results.json' )
        .then(result => {
            result.json().then(json => {

                function parseTree(nodes, parentID){
                    let tree = []
                    let length = nodes.length
                    for (let i = 0; i < length; i++){
                        let node = nodes[i]
                        if (node.post_parent == parentID){
                            let children = parseTree(nodes, node.ID)
                            if (children.length > 0) {
                                node.children = children
                            }
                            tree.push(node)
                        }
                    }
                    return tree
                }

                const completeTree = parseTree(json, "0")
                const annotatedTree = this.hasAnyGrandchildren(completeTree)
                this.tree = annotatedTree
                publishTree(annotatedTree)
                return annotatedTree
            })
        })
}

And then I come in with this mess of stuff. You can see most of it explained in the comments. It added things like arrows for pages that expand in the menu (rather than taking you to a page), it updated the URL so you could link directly to expanded menu items, it removed the additional descriptions from overview pages, etc.


//DOING MOST OF THE CONSTRUCTION WORK via concat bc I am lazy
function publishTree(tree){
    var menu = ''
    tree.forEach(function(item){
        if (item.hasGrandchildren === true) {
            menu = menu.concat('<li><h2>' + item.post_title + '</h2>')
            menu = menu.concat('<div class="cell-main-index">')
            menu = menu.concat(makeLimb(item.children, 'childbearing top'))
            menu = menu.concat('</div>')
            menu = menu.concat('</li>')
            limbMenu = '' //reset the global limb string between top-level items
        }
    })
    jQuery(menu).appendTo( "#app ul" );
    stunLinks()
    checkUrl()
    specialAddition()
}

var limbMenu = ''

//OOOOOH RECURSION for limb construction 
function makeLimb(data, type){
    limbMenu = limbMenu.concat('<ul>')
    data.forEach(function(item){
        if (item.hasGrandchildren === true){
            limbMenu = limbMenu.concat('<li><a id="menu_' + item.ID + '" class="' + type + '" href="' + item.guid + '">' + overviewClean(item.post_title) + ifParent(item.hasGrandchildren) + '</a>')
            makeLimb(item.children, "childbearing")
            limbMenu = limbMenu.concat('</li>')
        } else if (item.children && !item.hasGrandchildren) {
            //no grandchildren, so keep the title but link straight to the first child
            limbMenu = limbMenu.concat('<li><a class="live" href="' + item.children[0].guid + '">' + overviewClean(item.post_title) + '</a>')
            makeLimb(item.children, "live")
            limbMenu = limbMenu.concat('</li>')
        }
        //this is super ugly but this appears to be the only item that violates the pattern
        if (item.post_title == "Overview of connective tissues"){
            limbMenu = limbMenu.concat('<li><a class="live" href="' + item.guid + '">' + overviewClean(item.post_title) + '</a></li>')
        }
    })
    limbMenu = limbMenu.concat('</ul>')
    return limbMenu
}


//add arrow to indicate menu item has children to display vs taking you to the page URL 
function ifParent(kids){
    if (kids === true){
        return '<i class="fa fa-arrow-right"></i>'
    } else {
        return ""
    }
}


createTree();


//THIS CAME UP BC PAGES WERE CALLED OVERVIEW OF BLAH BLAH BLAH and they wanted to remove the blah blah blah part
function overviewClean(title){
  var regex = /overview/i;
  var found = title.match(regex)
  if (found === null){
    return title
  } else {
    return title.substring(0, 8) //trims the title down to just "Overview"
  }
}


//MAKE LINKS NOT BEHAVE LIKE LINKS instead add/remove classes
function stunLinks(){
    jQuery(".childbearing").click(function (e) {
      e.preventDefault(); 
      jQuery('.active').removeClass('active');
      jQuery(this).parent().children('ul').toggleClass('active');
      jQuery(this).parentsUntil('.cell-main-index').addClass('active');
      updateURL(this.id)
    });
}


//GET THE URL PATTERN TO EXPOSE MENU LEVELS via parameters
function checkUrl(){
  var id = getQueryVariable("menu");
  if (id){
     jQuery('#'+id).parent().children('ul').addClass('active');
     jQuery('#'+id).parents().addClass('active');
  }
}


//from https://css-tricks.com/snippets/javascript/get-url-variables/
function getQueryVariable(variable){
    var query = window.location.search.substring(1);
    var vars = query.split("&");
    for (var i = 0; i < vars.length; i++) {
        var pair = vars[i].split("=");
        if (pair[0] == variable){ return pair[1]; }
    }
    return false;
}

//THIS WAS DONE BC *ONE* PAGE DIDN'T FIT THE PATTERN 
function specialAddition(){
  if (document.getElementById('menu_325')){
    var exocrine = document.getElementById('menu_325')
    var parent = exocrine.parentElement.parentElement

    var node = document.createElement('li'); // create the <li>
    var a = document.createElement('a');     // create the link
    a.setAttribute('href', 'https://rampages.us/histology/?menu=menu_212');
    a.textContent = 'Endocrine ';
    node.appendChild(a);      // put the link inside the <li>
    parent.appendChild(node); // and the <li> into the menu
    a.innerHTML = a.innerHTML + '<i class="fa fa-arrow-right"></i>'
  }

}

//make url change per menu change so it's easier to share links etc.
//from https://eureka.ykyuen.info/2015/04/08/javascript-add-query-parameter-to-current-url-without-reload/
function updateURL(id) {
    if (history.pushState) {
        var newurl = window.location.protocol + "//" + window.location.host + window.location.pathname + '?menu=' + id;
        window.history.pushState({path: newurl}, '', newurl);
    }
}

Expanded menu for the histology page.
We now have something that’s pretty decent on desktops but I really need to rethink it fundamentally for mobile. On the histology faculty side, there is some dislike of the way bottom-tier menu items like Ear->Inner Ear break the “frame” as they expand downward. In the previous application, I think they just hand-assigned the layout. That’s relatively easy when you only have one window size and don’t allow people to alter things in a fluid manner. This kind of thing can be dealt with, but on my end I have to weigh the effort of doing it across a variety of screen sizes against the impact it’s likely to have on the average user of the site. Right now I can’t justify putting in the extra time.

Recently there was the desire to add multiple background images for the home page. I added an ACF repeater field for images and used a PHP function to randomize between the elements added there.

function randomHomeBackground(){
    $rows = get_field('background'); // get all the rows of the repeater
    if ( empty( $rows ) ) {
        return ''; // nothing set yet, so no background
    }
    $rand_row = $rows[ array_rand( $rows ) ]; // get a random row
    return $rand_row['background_image']; // return the sub field value
}

That gets used in the template like so.

<div id="content" class="clearfix row" style="background-image: url(<?php echo randomHomeBackground(); ?>)">

The Cell Pages

The old application cell layout.
You can see the previous layout above. We have an annotation layer on the right which adds overlays to the existing image and changes the text displayed under the cell. We also have the ability to navigate through additional cell images which change the annotation layers but still relate to the main topic.

So each page that is associated with content has a template that’s tied into ACF. It has a repeater field that lets the author associate as many title, description, and image pairings as they’d like, and the template uses them to build the navigation on the right side. A rough sketch of that loop follows the screenshot below.
Histology cell authoring page on the editor side.
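I won’t reproduce the whole template, but the repeater loop is roughly the shape below. This is a sketch rather than the site’s actual code; the field names (cell_slides, title, description, image) are assumptions, though the slide-button IDs and button class line up with what the quiz javascript further down looks for.

<?php
// Sketch of the ACF repeater loop behind the right-side navigation.
// have_rows(), the_row(), and get_sub_field() are standard ACF template functions;
// the field names are invented and the image field is assumed to return a URL.
if ( have_rows( 'cell_slides' ) ) : $i = 0; ?>
  <div class="slide-buttons">
  <?php while ( have_rows( 'cell_slides' ) ) : the_row(); ?>
    <button id="slide-button-<?php echo $i; ?>" class="button"
      data-image="<?php echo esc_url( get_sub_field( 'image' ) ); ?>"
      data-description="<?php echo esc_attr( get_sub_field( 'description' ) ); ?>">
      <?php echo esc_html( get_sub_field( 'title' ) ); ?>
    </button>
  <?php $i++; endwhile; ?>
  </div>
<?php endif; ?>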
The navigation on the bottom is built by querying other pages with the same parents. You can see a slider element in the old version and there’s been discussion about including a similar slider. I don’t believe it would work well in this scenario for a variety of reasons so I’ve been resistant. In this case we are loading a new page so scrolling would likely be slow and given the wide variation between the number of pages in these structures the layout would be awkward or intensive to develop. I also don’t see scrolling as a common way for navigating this type of web element. It’s not an interaction pattern I see elsewhere and the names don’t give you enough information for informed scrolling. I did tie the arrows to keyboard navigation via some javascript.
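The sibling query itself is nothing exotic. A minimal sketch, assuming core get_pages() ordered by menu_order (the function name and markup are mine, not the theme’s):

// Sketch: build the bottom navigation from pages that share this page's parent.
// get_pages(), get_permalink(), and esc_html() are core WordPress; the rest is invented.
function histology_sibling_nav() {
    global $post;
    $siblings = get_pages( array(
        'parent'      => $post->post_parent, // pages with the same parent as the current page
        'sort_column' => 'menu_order',
    ) );
    $output = '<ul class="bottom-nav">';
    foreach ( $siblings as $sibling ) {
        $current = ( $sibling->ID === $post->ID ) ? ' class="current"' : '';
        $output .= '<li' . $current . '><a href="' . get_permalink( $sibling->ID ) . '">' . esc_html( $sibling->post_title ) . '</a></li>';
    }
    return $output . '</ul>';
}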


//KEY BINDING for nav
function leftArrowPressed() {
   var arrow = document.getElementById('nav-arrow-left');
   if (arrow) { //guard in case there's no left arrow on this page
     window.location.href = arrow.parentElement.href;
   }
}

function rightArrowPressed() {
   var arrow = document.getElementById('nav-arrow-right');
   if (arrow) { //same guard for the right arrow
     window.location.href = arrow.parentElement.href;
   }
}

document.onkeydown = function(evt) {
    evt = evt || window.event;
    switch (evt.keyCode) {
        case 37:
            leftArrowPressed();
            break;
        case 39:
            rightArrowPressed();
            break;
    }
};

There’s a bunch of PHP and javascript going on to make all this happen but I wrote most of it around 2 years ago and I don’t want to inflict it on anyone. The nice thing is I’ve learned a lot in two years. The bad thing is that I’m now considering rewriting the whole thing.5

You might also notice a button labeled ‘hide’ towards the upper right. It replaces the right-hand navigation names with ‘* * *’ and blanks out the text so students can quiz themselves. There are some other possibilities there that might get more complex, but that’s what exists after the latest round of conversations.

//HIDE AND SEEK FOR QUIZ YOURSELF STUFF
function hideSlideTitles(){
    var mainSlide = document.getElementById('slide-button-0'); 
    if (mainSlide){
      var buttons = document.getElementsByClassName('button');
      var subslides = document.getElementsByClassName('sub-deep');
      for (var i = 0; i < buttons.length; i++){
        var original = buttons[i].innerHTML;
        buttons[i].innerHTML = '<span class="hidden">' + original + '</span>* * *';        
        }
      for (var i = 0; i < subslides.length; i++){
            subslides[i].classList.add('nope')
        }
        document.getElementById('the_slide_title').classList.add('nope')
        document.getElementById('the_slide_content').classList.add('nope')
        document.getElementById('quizzer').dataset.quizstate = 'hidden'
        document.getElementById('quizzer').innerHTML = 'Show'
    }
}


function showSlideTitles(){
  var mainSlide = document.getElementById('slide-button-0'); 
    if (mainSlide){
      var buttons = document.getElementsByClassName('button');

      for (var i =0; i < buttons.length; i++){
        var hidden = buttons[i].firstChild.innerHTML;
          buttons[i].innerHTML = hidden;       
        }
        document.getElementById('the_slide_title').classList.remove('nope')
        document.getElementById('the_slide_content').classList.remove('nope')
        document.getElementById('quizzer').dataset.quizstate = 'visible'
        document.getElementById('quizzer').innerHTML = 'Hide'
        var subslides = document.getElementsByClassName('sub-deep');
        for (var i = 0; i < subslides.length; i++){
            subslides[i].classList.remove('nope')
        }
    }
}


function setQuizState(){
  var state = document.getElementById('quizzer').dataset.quizstate
  if (state === 'hidden'){
    showSlideTitles()
  } else {
    hideSlideTitles()
  }
}

function retainQuizState(){
  var state = document.getElementById('quizzer').dataset.quizstate
  if (state === 'hidden'){
    hideSlideTitles()
  } else if (state === 'visible'){
    showSlideTitles()
  }
}


jQuery( document ).ready(function() {
  document.getElementById('quizzer').addEventListener("click", setQuizState);
});

Quizzes

The site also has a set of quizzes built in H5P. They’re gathered on one page based on having the common page parent, Quiz (a sketch of that query follows the code below). We had to set up some custom CSS to make the images go to full size by default and then add it via some PHP so it’d work the way we wanted.

.h5p-column-content.h5p-image > img, .h5p-question-image-scalable  {
  width: 100% !important;
  height: auto !important;
  max-width: 100%  !important;
}

.h5p-question-scorebar-container {
	display: none !important;
}

function h5p_full_img_alter_styles(&$styles, $libraries, $embed_type) {
  $styles[] = (object) array(
    // Path must be relative to wp-content/uploads/h5p or absolute.
    'path' => get_stylesheet_directory_uri() . '/custom-h5p.css',
    'version' => '?ver=0.1' // Cache buster
  );
}
add_action('h5p_alter_library_styles', 'h5p_full_img_alter_styles', 10, 3);
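As for gathering the quizzes onto that page in the first place, the listing is just a query for children of the Quiz parent page. A minimal sketch, assuming the parent page’s slug is quiz (the function name is mine):

// Sketch: list all quiz pages by querying the children of the page titled Quiz.
// get_page_by_path(), get_pages(), and get_permalink() are core WordPress.
function histology_quiz_list() {
    $quiz_parent = get_page_by_path( 'quiz' ); // assumes the parent page's slug is 'quiz'
    if ( ! $quiz_parent ) {
        return '';
    }
    $quizzes = get_pages( array(
        'parent'      => $quiz_parent->ID,
        'sort_column' => 'menu_order',
    ) );
    $output = '<ul class="quiz-list">';
    foreach ( $quizzes as $quiz ) {
        $output .= '<li><a href="' . get_permalink( $quiz->ID ) . '">' . esc_html( $quiz->post_title ) . '</a></li>';
    }
    return $output . '</ul>';
}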

1 Watch the pages scroll by . . .

2 I’m not sure if that’s because of the way the project got started or a result of choices I made.

3 I can’t recall what. Thus the need to write these blog posts more often.

4 I am not the cutting edge. I am not the edge. I am not the cut. I am baling wire, duct tape, and stubbornness.

5 Refactoring if you’re nasty.

Weekly Web Harvest for 2019-02-10

  • Remote Attendance – CHI 2019
    You can hire a human proxy to bring you into the conference remotely through a wearable tablet and video conferencing software. The image below shows an example human proxy setup. Human proxies represent a lightweight way for you to be present and move throughout the conference venue for meeting other attendees and socializing. (Note: due to cost and venue challenges, we will not have telepresence robots – Beams – at CHI this year)
  • i painted
    interactive web page for the painting of painting meme thing . . .

Photography with Faculty

IMG_1239.jpg

I had the opportunity to work with Ryan Smith again recently. He’s been putting in serious work on his website (Richmond Cemeteries) and is now turning a portion of that work into a book (Death and Rebirth in a Southern City: Richmond’s Historic Cemeteries). Ryan came by to talk a bit about pictures for the book, which led to a field trip (Hebrew Cemetery and Shockoe Street Cemetery) and, I think, some useful reflections on how the balance between technology, technical proficiency, and art works together to make something interesting. It’s a bit of a rambling tour through a series of issues that are specific to this task (getting high quality images of grave markers for a book) but are also illustrative of larger things.

Basic Considerations

Light

Light matters quite a bit. When we looked through Ryan’s initial photos many of them were taken in very bright light. That’s good in some scenarios but leads to really hard shadows. In any photo, thinking hard about where the light is and how it falls will be key in creating the image you want. Usually you want the light behind you. Usually you want it to be soft.

I showed up a little before sunrise but I didn’t have a shot list and I’d never visited the site before. That led to some mistakes or at least a poor use of time. I knew I wanted to get the “golden hour” light. That worked out ok but I didn’t really take into account the fact that the cemetery is on a hill. That led to the sun hitting one side very quickly. In the future, I’d take elevation into better account and work from East to West. I’d also visit the site once to figure out how I was going to work through the shots that were needed. You can see the better light in the first picture in this post and then how things become more harsh in the image directly below. The shadows from the markers in the foreground could be something artistic but they tend to distract from more functional/documentary images.
IMG_1369.jpg
Depending on what settings you have for the camera, you’re also dealing with how the light meter works. The camera manipulates aperture, ISO, and shutter speed to try to keep all the bright parts from being too bright and all the dark parts from being too dark. The larger the difference between the darkest portion and brightest portion, the harder it is to get them both within the range the camera can capture. That’s how you end up with blown out skies (so bright the sky turns to white with no details) or with shadows that have no detail (just featureless black).

You can manipulate where the light meter in the camera samples from to end up with more control. You can also take more control over all the settings. You can also take control over which things the camera can change. I suggest doing this gradually by using aperture priority and other camera settings so that you can focus on one variable at a time. This will help you build up room in your head to hold all the variables at once (while also worrying about angles, light, background, not falling off a cliff etc.).

Also consider that you can manipulate light to a degree. You can use physical reflectors. That can be a piece of poster board or a windshield shade. That’ll let you bounce light back to help light up the shadowed areas. Off-camera flash is another option but it’s far more expensive and opens up another world for consideration. That’s great if you want it, but most people want something easier with fewer choices. The Strobist would be my first step down that path if you were interested.

Background

Background awareness is a skill. It’s easy to get tunnel vision. You see something awesome and you take the picture. All your attention focuses on the object in the foreground. You end up missing the weird things in the background that distract other people. Step one is being aware of the background and then you start to integrate it with intention. There are a variety of ways to do that.

Changing angles is one way to manipulate the background. People tend to take photos from their normal standing height. Take the gate below. I moved inside to shoot it against the less busy path. I did that by elevating the camera. It did some good things but also led to some things I didn’t like.
IMG_1383.jpg

I could also drop the lens really low and try to shoot against the sky. That works in some scenarios. It doesn’t work so well in this case. It ends up being a bit too dramatic and the dark magnolia trees end up obscuring things. That plus the loss of detail in the fence makes this kind of fun but not usable.
IMG_1386.jpg

Given our scenario, I opted to retry creating some more separation using the path and keeping in mind that these images would be black and white in the book. Many cameras will let you preview the images in black and white. If you’ve got that option it’s well worth changing the settings if that’s how your images are going to end up. You may be able to “see” your color images in black and white but I find it very difficult.

This image is a fairly normal perspective and I did more post-processing work to get the background toned down. You can also see that I tightened up the crop. Another random tip: shoot wider than you need, since you can always go back and trim it down but the reverse is not true.
IMG_1384.jpg

Depth of Field

The other thing you can do to help create separation in images is to manipulate the depth of field. The shallower the depth of field, the more the background will blur/bokeh. This is going to be mechanically limited by the lens. If a lens says it’s f2.8, that’s the widest aperture you have and it will give you the shallowest depth of field. It also lets in the most light. You can now do more of this with software, and there are some fringe hardware options on the horizon, but for most people it’s easier to do it optically when you take the shot. You can see that in many of these pictures.

Hardware

Taking pictures is not about the camera or lens except when it is. You can have really nice equipment and still fail to take good pictures but there are times when you cannot take the picture you want without the right equipment. Figuring out the kind of pictures you want to take and investing in a decent lens oriented towards that kind of image is a good idea. People smarter than me tend to prioritize lenses over camera bodies. You can certainly dwell too much on the hardware but I dislike just how easily some people say that “the hardware doesn’t matter.”

There is an endless amount of gear you can pursue: bodies, lenses, off-camera flash . . . it all makes sense in different scenarios with different budgets.

Other Hardware

But there are also other pieces of hardware that people don’t really think of: ladders for bird’s-eye shots, poster board or a car sunshade for reflecting light, maybe a selfie stick, etc. That stuff is far cheaper than high quality lenses.

Software/Post Processing

It’s not clear to me what the lines are here. It’s interesting to look at these National Geographic rules to take note of all the things that they consider. There’s a lot there but they don’t mention things like correcting lens aberrations or correcting perspective. Does this make the image more real or is it digital fraud? I don’t know. Lots of stuff to think about and in an environment that will only get more complex.

Stuff I Need to do Better

Backwards Design

I’m not used to having a focused academic goal with my photos. I usually shoot for my own amusement or towards some rough idea of being visually interesting. That’s not the same as shooting for a book. We’re not leading with an artistic idea, we’re leading with a point that Ryan is trying to illustrate. That idea of backwards design for photos was not something I approached in enough detail for this round. When we finished, we came back and looked at the original photos vs the new photos and talked through what was working and not working. That’s when the details came out that I should have figured out better ahead of time.

The angel series below makes for a good example. In the image below I captured the statue with the sun at my back. I framed it such that I cut off a portion of the pedestal and you’ll note that the side I captured has the broken off hand. So good for the light but bad for the statue. We did figure that out in the field so I took some shots from the other side.
IMG_1342.jpg

It turns out that the size of the statue was also important. This shot deals with the missing hand but the position makes it hard to tell that the statue is large. It’s also moving towards artistic drama rather than a photo that represents what a visitor would see. It also came out that capturing the androgynous nature of the angel and some additional details was important. That’s all stuff I should have figured out ahead of time. I ended up with a variety of shots but none that really did a good job on all those elements.

Untitled
We came up with this after we came back. It is something I will think about for the future. It makes for a nice planning document if pictures already exist. I think doing some phone scouting would also work pretty well. It’d give you a chance to go over the location and have something real to talk about and work with.

It may also be that we can’t do all those things at once. We may have to decide on the top goals. It’ll be difficult to get the detail of the face and the entire piece in the same shot. That aspect of having to choose only one image is much more of an issue for books than for the web. Digital would open up a range of possibilities and interaction options but that’s for another day . . .
IMG_1351.jpg

You can see a similar struggle with documenting the first marker in the cemetery. One goal was to make it legible but we also wanted to show the whole thing (including that it’s not flat). That was complicated by the fact that the left side was pretty tightly crowded with other markers. You can see in the original photo below that you’ve got a hard shadow from fairly harsh sun. You can also see the other markers intruding a bit.
Untitled

You can see an attempt at the overhead view using the step ladder in this image. We saw on-site that it wouldn’t work because while it’s better for legibility it flattens the marker into two dimensions.
IMG_1394.jpg

Here we try framing the marker more tightly. We get the lower script and the rocks but we lose the writing in the upper portion. It gets the three dimensional aspect but we have the brick walk creeping into the corner. I’m not thrilled with the contrast in general. This is one I’ll probably reshoot.
IMG_1423.jpg

I feel like I learned a lot in this attempt. It has also further helped me hone what I share with faculty and how I try to scaffold their photography skill acquisition as they consider taking on this work. This is extra work for them in most cases and work they take on in an entirely practical way. “What is the least they need to know?” is the question I must keep asking myself. “What is the easiest path?”

Weekly Web Harvest for 2019-02-03

Get the PDFs – Google Search to Google Folder

Here’s a neat little pattern that might interest others. We got a version of the question below yesterday.

Is there a way to automatically get the links from the search linked below into a spreadsheet?

https://www.google.com/search?q=site%3Aedu+filetype%3Apdf+syllabus+education&oq=site%3Aedu+filetype%3Apdf+syllabus+education

Then, from there, is there a way to automagically get the pdf files into a Drive folder?

Step 1: Get Google Search into Google Sheets

At first it seemed this would be really simple. Amit1 had done this really well back in 2015. Unfortunately, Google has started blocking this . . . even when you do it within Google Sheets/Scripts. This made me sad. Browser emulators and Python were dancing in my head but it seemed a bit too complex for a one-time action.

Instead of over-complicating things, I opted to use a Chrome plugin called Scraper. I’ve had it installed for a long time. It lets you easily do xpath scraping of websites. You can see it in action in the video below.

I also used the search settings to bump the number of results per page up to 100. Once I captured the info to the clipboard I just pasted it into Google Sheets.

Step 2: Save the PDFs

Now I just needed to loop through the URLs and save the PDFs to a particular Google Folder.

This Google Apps Script function gets my data and loops through it.

function getUrls(){
   var ss = SpreadsheetApp.getActiveSpreadsheet();//get the spreadsheet
   var sheet = ss.getActiveSheet();//get the right sheet
   var lastRow = sheet.getLastRow();//get the last row
   var urls = sheet.getRange('B1:B'+ lastRow).getValues();//get the urls column as a 2D array of rows
   urls.forEach(function(row){
     saveInDriveFolder(row[0]);//each row holds one cell (the url); this will do the saving
   });
}

Now to save the PDFs to a particular folder . . .

function saveInDriveFolder(url){
  var folder = DriveApp.getFolderById('YOUR_ID_HERE');// getting the folder by ID is easy and you can just copy the ID from the URL
  var options = {'muteHttpExceptions':true}; //important as there were missing files and if you don't mute the exceptions the script will fail
  var file = UrlFetchApp.fetch(url, options); // get the file
  if (file) {
    folder.createFile(file);//create the file in the folder
  }
}

1 A Google scripts man/myth/legend

KSES and Voice Thread’s Embed Code

Voice Thread’s embed code should look like what you see below.

<iframe width="480" height="270" src="https://auth.voicethread.com/app/player/?threadId=7647336" frameborder="0" allowusermedia allowfullscreen allow="camera https://auth.voicethread.com; microphone https://auth.voicethread.com; fullscreen https://auth.voicethread.com;"></iframe>

For non-super admins who have iframe embed rights, we were getting this instead (once the post had been updated and cleansed by our friend kses).

<iframe width="480" height="270" src="https://auth.voicethread.com/app/player/?threadId=7647336" frameborder="0"></iframe>

It still mostly worked but we needed to tweak things to let in those additional attributes (allowusermedia, allowfullscreen, allow). You can see the pieces we needed to add to the iframe array flagged in the comments below.


add_filter( 'wp_kses_allowed_html', 'esw_author_cap_filter',1,1 );

function esw_author_cap_filter( $allowedposttags ) {

    if ( !current_user_can( 'publish_posts' ) ) { //we set this for authors and higher
        return $allowedposttags;
    }

    // Here add tags and attributes you want to allow

    $allowedposttags['iframe'] = array(
        'align' => true,
        'width' => true,
        'height' => true,
        'frameborder' => true,
        'name' => true,
        'src' => true,
        'id' => true,
        'class' => true,
        'style' => true,
        'scrolling' => true,
        'marginwidth' => true,
        'marginheight' => true,
        'mozallowfullscreen' => true,
        'webkitallowfullscreen' => true,
        'allowusermedia' => true,//*******************newly added
        'allowfullscreen' => true,//*******************newly added
        'allow' => true,//*******************newly added
    );

$allowedposttags["object"] = array(
 "height" => array(),
 "width" => array()
);
 
$allowedposttags["param"] = array(
 "name" => array(),
 "value" => array()
);

$allowedposttags["embed"] = array(
 "type" => array(),
 "src" => array(),
 "flashvars" => array()
);


return $allowedposttags;

}

Gravity Forms User Registration After the Fact

Gravity Forms lets you set up user registration via an additional plugin but it requires some setup. It’s not hard to run into a scenario where you thought people were getting registered but they were not. Not a big deal if it’s a handful of people but not pleasant if it’s more than that.

I wrote this little plugin the other night to deal with a scenario like this. The comments below explain most of the important bits. It will require you to know what your form ID is, the form field IDs, and the blog ID of the site you want to add the users to.

I trigger the function by attaching it to a shortcode and sticking that shortcode in a post or page. I’m not sure that’s the best idea but it seems to work fine.


<?php 
/*
Plugin Name: turn back time user maker
Plugin URI: https://github.com/woodwardtw/
Description: 
Author: Tom Woodward
Version: 1.5
Author URI: http://bionicteaching.com/
*/

//gravity form fetch
function make_users_now(){
    $search_criteria = array();
    $sorting = array();
    $paging = array( 'offset' => 0, 'page_size' => 100 );//set to deal with up to 100 entries

    $entries = GFAPI::get_entries( 1, $search_criteria, $sorting, $paging );//the first number is the ID of the form you're referencing

    if ( !empty( $entries ) ){
        foreach ( $entries as $entry ) {
            $user_name = $entry['2'];//match these up with the field IDs in your form
            $password = $entry['4'];
            $email = $entry['3'];

            //make the user if their email isn't already there, otherwise grab the existing user's ID
            if ( !email_exists( $email ) ){
                $user_id = wpmu_create_user( $user_name, $password, $email );
            } else {
                $user_id = get_user_by( 'email', $email )->ID;
            }

            //add the user to the ddp site
            if ( $user_id ){
                add_user_to_blog( 19, $user_id, 'author' );//make the first number (19) match the blog you want to add them to
            }
        }
    }
}

add_shortcode( 'makeusersnow', 'make_users_now' );//stick this shortcode on a page and visit the page to trigger this

Weekly Web Harvest for 2019-01-20

  • Tree Profile: Aspen – So Much More Than a Tree – National Forest Foundation
    One aspen tree is actually only a small part of a larger organism. A stand or group of aspen trees is considered a singular organism with the main life force underground in the extensive root system. Before a single aspen trunk appears above the surface, the root system may lie dormant for many years until the conditions are just right, including sufficient sunlight. In a single stand, each tree is a genetic replicate of the other, hence the name a “clone” of aspens used to describe a stand.
  • Amazon.com: The Field of Blood: Violence in Congress and the Road to Civil War eBook: Joanne B. Freeman: Kindle Store
    In The Field of Blood, Joanne B. Freeman recovers the long-lost story of physical violence on the floor of the U.S. Congress.
  • Arborists Have Cloned Ancient Redwoods From Their Massive Stumps – Yale E360
    A team of arborists has successfully cloned and grown saplings from the stumps of some of the world’s oldest and largest coast redwoods, some of which were 3,000 years old and measured 35 feet in diameter when they were cut down in the 19th and 20th centuries. Earlier this month, 75 of the cloned saplings were planted at the Presidio national park in San Francisco.

    *****
    They tried to escape but we would not even let them die.

  • Virtual Reality Quarterback Training | BurstVR
  • The History of Teaching Machines
    from Audrey Watters
    http://audreywatters.com/

  • Life Without the Tech Giants
    It’s not just logging off of Facebook; it’s logging off the countless websites that use Facebook to log in. It’s not just using DuckDuckGo instead of Google search; it’s abandoning my email, switching browsers, giving up a smartphone, and living life without mapping apps. It’s not just refusing to buy toilet paper on Amazon.com; it’s being blocked from reading giant swaths of the internet that are hosted on Amazon servers, giving up websites and apps that I didn’t previously know were connected to the biggest internet giant of them all.

Motherblog Plugin Error

Ran into an interesting bug today on the Motherblog plugin. Despite being in use for a number of years this is the first time we’ve run into this particular issue.

It seemed that if a student was using their blog for two different courses and the two courses used identical subcategories for assignments, the subcategories would not be created on the student blog.

So if class A did something like
[altlab-motherblog category="ClassA" sub_categories="Blog 1, Blog 2, Blog 3"]

And class B did
[altlab-motherblog category="ClassB" sub_categories="Blog 1, Blog 2, Blog 3"]

Then whichever one went first would work fine but the second one would only duplicate the parent category.

This plugin was written by Mark Luetke who’s been gone for a long time now. It’s often not easy to debug my own work and it’s harder to parse out someone else’s work. After a bit of scanning I did find the following function. Note that it’s named something sensible which made it much easier to find.

function create_sub_categories($string, $category){

    if ($string){

        $string = str_replace(' ', '', $string);
        $array = explode(',', $string);
        foreach( $array as $item ){

            $the_sub_category = get_term_by('name', $item, 'category');

            if( !$the_sub_category ){
                $args = array(
                    'parent' => $category->term_id,
                );
                wp_insert_term( $item, 'category', $args );
            }
        }
    }
}

The key component ended up being the catch that looks for duplicates.

$the_sub_category = get_term_by('name', $item, 'category');

There’s no way I saw to modify this to check for the parent ID first. The plugin just asks if the term exists and then, if it doesn’t (!$the_sub_category), it makes the subcategory. Our problem was that the term did exist, but we wanted the extra step of checking whether the existing term was a child of the parent category. It turns out that get_term_by does return the parent ID, so we can add an OR statement that says make the subcategory if it doesn’t exist or if its parent doesn’t equal the parent category that we’ve just made.

That looks like the piece below. Just a tiny modification.

function create_sub_categories($string, $category){

    if ($string){

        $string = str_replace(' ', '', $string);
        $array = explode(',', $string);
        foreach( $array as $item ){

            $the_sub_category = get_term_by('name', $item, 'category');

            //expanded if statement to deal with duplicate sub cats on the destination blog
            if( !$the_sub_category || (int)$the_sub_category->parent != (int)$category->term_id ){
                $args = array(
                    'parent' => $category->term_id,
                );
                wp_insert_term( $item, 'category', $args );
            }
        }
    }
}