• Announcements

    • Olivier Devriese

      New forum feature: blogs!   30/04/2016

      Want to create your own FileMaker blog? It could not be simpler than with this new section of the FM Source site, which is now more than just a forum but a true resource center. You can also feed it easily by linking it to the RSS feed of another blog you already own.
  • posts
    54
  • comments
    3
  • views
    1,200

Posts in this blog

Soliant Consulting

In my previous blog post I wrote about handling form data with Formidable, but I didn’t mention how to work with file uploads. This is because Formidable by itself does not handle file uploads at all, only string data. By now many people have asked me how to handle that, if not with the library itself. My answer is quite simple: use the tools your PSR-7 implementation already gives you.

Meet the UploadedFileInterface

Any library implementing PSR-7 has a getUploadedFiles() method on its server request implementation. This method returns an array of objects implementing Psr\Http\Message\UploadedFileInterface. There are many ways that files can be transmitted to the server, so let’s roll with the simplest one right now: a form with a single file input and nothing else. In that case, your middleware may look something like this:

<?php
use Interop\Http\ServerMiddleware\DelegateInterface;
use Interop\Http\ServerMiddleware\MiddlewareInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Message\UploadedFileInterface;

final class UploadMiddleware implements MiddlewareInterface
{
    public function process(ServerRequestInterface $request, DelegateInterface $delegate)
    {
        $uploadedFiles = $request->getUploadedFiles();
        
        if (!array_key_exists('file', $uploadedFiles)) {
            // Return an error response
        }
        
        /* @var $file UploadedFileInterface */
        $file = $uploadedFiles['file'];

        if (UPLOAD_ERR_OK !== $file->getError()) {
            // Return error response
        }
        
        $file->moveTo('/storage/location');
        
        // At this point you may want to check if the uploaded file matches the criteria your domain dictates. If you
        // want to check for valid images, you may try to load it with Imagick, or use finfo to validate the mime type.
        
        // Return successful response
    }
}

This is a very basic example, but it illustrates how to handle any kind of file upload. Please note that Psr\Http\Message\UploadedFileInterface doesn’t give you access to the temporary file name, so you actually have to move the file to another location first before doing any checks on it. This ensures that the file was actually uploaded and is not coming from a malicious source.
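
For example, once the file has been moved, a MIME type check could look roughly like the sketch below, using PHP's finfo extension as the comment in the example above suggests. The storage path and the list of allowed types are placeholder assumptions, not part of the original example.

<?php
// Sketch only: validate the MIME type of the already-moved upload with finfo.
// '/storage/location' and the allowed types are placeholders; adjust to your setup.
$storagePath = '/storage/location';

$finfo = new finfo(FILEINFO_MIME_TYPE);
$mimeType = $finfo->file($storagePath);

if (!in_array($mimeType, ['image/png', 'image/jpeg'], true)) {
    unlink($storagePath); // discard the rejected upload

    // Return an error response
}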

Integration with Formidable

The previous example just gave you an idea of handling a PSR-7 file upload on its own, without any further data transmitted with the file. If you want to validate your POST data first, your middleware could look similar to this:

<?php
use DASPRiD\Formidable\FormError\FormError;
use DASPRiD\Formidable\FormInterface;
use Interop\Http\ServerMiddleware\DelegateInterface;
use Interop\Http\ServerMiddleware\MiddlewareInterface;
use Psr\Http\Message\ServerRequestInterface;

final class UploadMiddleware implements MiddlewareInterface
{
    /**
     * @var FormInterface
     */
    private $form;

    public function process(ServerRequestInterface $request, DelegateInterface $delegate)
    {
        $form = $this->form;
        
        if ('POST' === $request->getMethod()) {
            $form = $form->bindFromRequest($request);
            
            if (!$form->hasErrors()) {
                $fileUploadSuccess = $this->processFileUpload($request);
                
                if (!$fileUploadSuccess) {
                    // The form is immutable, so keep the instance carrying the error
                    $form = $form->withError(new FormError('file', 'Upload error'));
                } else {
                    // Persist $form->getValue();
                }
            }
        }

        // Render HTML with $form
    }
    
    private function processFileUpload(ServerRequestInterface $request) : bool
    {
        // Do the same checks as in the previous example
    }
}

As you can see, you simply stack the file upload processing onto the normal form handling; the two don’t have to interact at all, except for putting an error on the form for the file element.

The post PSR-7 Middleware File Upload with Formidable appeared first on Soliant Consulting.



Soliant Consulting

If you use Pardot to handle your marketing campaigns and have tried to integrate your Google AdWords to your Salesforce org, you have probably noticed that Google does not provide any step-by-step solutions on how to integrate all three of them together to track your clickable ads. It took some time, but after some coding changes and a rather long phone call with Google, there is a solution that can now be followed to solve this.

If you are using a native Salesforce web-to-lead form, then you can find standard support here from Google. If you use Pardot for your landing pages, continue reading below to get some help integrating Google AdWords and Salesforce through Pardot.

Setting Up Your Files

Create new GCLID Fields

To start off, let’s create new GCLID fields on both the opportunity and lead objects. See Figures 1 and 2 below.

Figure 1. Add the GCLID field to the Opportunity (click image to enlarge).

Figure 2. Add the GCLID field to the Lead (click image to enlarge).

After the two fields have been created on the opportunity and lead objects, we must map the fields, as shown in Figures 3 and 4.

Figure 3. Begin to map the lead fields.

Figure 4. Mapping the Lead and GCLID fields

Add the script to your landing pages

Now that the configurations have been completed, it’s time to touch some code on your website. If you don’t have access to this, contact your webmaster to help with this step. A cookie value needs to be stored on your website to save the GCLID based on the ad that was clicked. The following script should be added before the closing tags on all of your landing pages on the website.


<script type="text/javascript">
    function setCookie(name, value, days) {
        var date = new Date();
        date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
        var expires = "; expires=" + date.toGMTString();
        document.cookie = name + "=" + value + expires + ";domain=" + location.hostname.replace("www.", '');
    }
    function getParam(p) {
        var match = RegExp('[?&]' + p + '=([^&]*)').exec(window.location.search);
        return match && decodeURIComponent(match[1].replace(/\+/g, ' '));
    }
    var gclid = getParam('gclid');
    if (gclid) {
        var gclsrc = getParam('gclsrc');
        if (!gclsrc || gclsrc.indexOf('aw') !== -1) {
            setCookie('gclid', gclid, 90);
        }
    }
</script>

Create a hidden field

Once this step is completed, we will now focus on the Pardot portion of the integration. To start off, on your landing pages, create a hidden field labeled GCLID.

Figure 5. Create a hidden GCLID field (click image to enlarge).

Add code snippet to your form

Next, on the same form, click “Look and Feel” on the menu bar towards the top of the page. Click the “Below Form” tab, and then, all the way to the right, click the HTML button (next to the omega symbol).

Figure 6. Add code snippet to your form (click image to enlarge).

<script>
  window.onload = function getGclid() {
    document.getElementById("xxxx").value = (name = new RegExp('(?:^|;\\s*)gclid=([^;]*)').exec(document.cookie)) ? name.split(",")[1] : "";
  }
</script>

After this piece of code is inserted into the Pardot form, you are now ready to test the integration between Salesforce and Google AdWords through Pardot. In the URL of your contact us page, add “?gclid=blogTest” (or any testing word) at the end as shown below.


www.soliantconsulting.com/contact?gclid=blogTest

Find the information submitted

Once you submit the lead information, in Salesforce, go to Leads and find the information that you submitted (see Figure 7).

Figure 7. Find the information you submitted (click image to enlarge).

Keyword added to the GCLID field

In the GCLID field, you should see the keyword that you entered at the end of the URL in the step above, in my case being “blogTest” as shown in Figure 8.

Figure 8. Keyword added to the GCLID field (click image to enlarge).

When the link is successful, meaning you see the keyword “blogTest” that you entered into the URL on your lead in Salesforce, you have now integrated Google AdWords with Salesforce through Pardot! The final step is to link your Salesforce account to your Google AdWords account.

Link Your Salesforce and Google AdWords Accounts

Sign in to your Google AdWords account; on the right-hand side, next to your customer ID, you will see a cog. When you click on the cog, there should be a link called “Linked accounts.”

Figure 9. Click on the cog to access "Linked accounts."

Choose accounts to link to Google AdWords

After you have clicked the Linked accounts link, you should be on the following page. Here you can choose which accounts to link to your Google AdWords account. In our case, click on “View details” under Salesforce.com.

Figure 10. Choose an account to link to Google AdWords

Log into your Salesforce organization

Finally, click on the “+ Account” button on the page, and you will be redirected to the Salesforce authentication page to log in to your Salesforce organization.

Figure 11. Use the add account button (click image to enlarge).

Once your Salesforce organization is linked, you will be prompted to set up conversions that are relevant to your Google ads. After you set up these conversions, you are now ready to completely track your clickable ads with AdWords and Salesforce through Pardot.

The post Integrate Google AdWords with Your Salesforce Org Through Pardot appeared first on Soliant Consulting.



Soliant Consulting

Introduction

For many years, I’ve been using Zend_Form from Zend Framework 1, Zend\Form from Zend Framework 2, and a few other form libraries. With the advent of Zend Framework 3 and more type hinting options in PHP 7, I started to wonder if there is a nicer way to handle forms. I got a little sick of libraries trying to dictate the resulting HTML or just making it really hard to create custom HTML.

So I did what I always do when I’m in this position: I looked around at different frameworks, even from other languages, to see how others solved the problem. After a few days of research, I ended up liking the approach of the Play Framework a lot, specifically the one in their Scala implementation. The first thing I did was of course learn to read Scala, which took me a little while because the syntax is quite different from what I was used to. After that I was able to understand the structure and how things worked, so I could start writing a PHP library based on it, named Formidable.

How it works

Formidable works similarly to the form libraries you are already familiar with, yet it is slightly different. There is no mechanism in place to render any HTML, although it comes with a few helpers to render generic input elements; those are mostly there as a demonstration to build your own renderers on. Also, every object within Formidable is considered immutable, so when passing around a form object, you can be sure that it’s just for you and nothing else modified it.

A form object always has a mapping assigned, which takes care of translating values between the input (usually POST) and a value object. There is no magic going on to hydrate entities directly; everything goes through those value objects. The mappings are also responsible for validating your input, but offer no filter mechanism. Before I started writing this library, I analyzed all of my prior projects and discussed with other developers, and the only real pre-validation filtering we ever did was trimming the input, which also became a default in Formidable. In the rare use cases we could think of where special filters really were called for, we decided that such filtering is better left to the application code outside the form.

I won’t go into detail about how you build forms with Formidable, as that topic is covered in the Formidable documentation. Instead, I’m going to tell you how to use the resulting forms properly.

Using Formidable forms

Let’s say we have a form for blog entries, which means we’ll have a value object that takes the title and the content from the form and is also responsible for actually creating blog entries and updating existing ones:

Example value object

final class BlogEntryData
{
    private $title;
    private $content;
    
    public function __construct(string $title, string $content)
    {
        $this->title = $title;
        $this->content = $content;
    }
    
    public static function fromBlogEntry(BlogEntry $blogEntry) : self
    {
        return new self(
            $blogEntry->getTitle(),
            $blogEntry->getContent()
        );
    }
    
    public function createBlogEntry(int $creatorId) : BlogEntry
    {
        return new BlogEntry($creatorId, $this->title, $this->content);
    }
    
    public function updateBlogEntry(BlogEntry $blogEntry) : void
    {
        $blogEntry->update($this->title, $this->content);
    }
}

As you can see, our value object has all the logic nicely encapsulated to work with the actual blog entry. Now let’s see what our middleware for creating blog entries would look like:

Example create middleware

use DASPRiD\Formidable\Form;
use Psr\Http\Message\ServerRequestInterface;

final class CreateBlogEntry
{
    private $form;
    
    public function __construct(Form $form)
    {
        $this->form = $form;
    }

    public function __invoke(ServerRequestInterface $request)
    {
        if ('POST' === $request->getMethod()) {
            $form = $this->form->bindFromRequest($request);
            
            if (!$form->hasErrors()) {
                $blogEntryData = $form->getValue();
                persistSomewhere($blogEntryData->createBlogEntry(getUserId()));
            }
        } else {
            $form = $this->form;
        }
        
        return renderViewWithForm($form);
    }
}

The update middleware requires a bit more work, since we have to work with an already existing blog entry, but it will mostly look the same as our create middleware:

Example update middleware

use DASPRiD\Formidable\Form;
use Psr\Http\Message\ServerRequestInterface;

final class UpdateBlogEntry
{
    private $form;
    
    public function __construct(Form $form)
    {
        $this->form = $form;
    }

    public function __invoke(ServerRequestInterface $request)
    {
        $blogEntry = getBlogEntryToEdit();
            
        if ('POST' === $request->getMethod()) {
            $form = $this->form->bindFromRequest($request);
            
            if (!$form->hasErrors()) {
                $blogEntryData = $form->getValue();
                $blogEntryData->updateBlogEntry($blogEntry);
                persistSomewhere($blogEntry);
            }
        } else {
            $form = $this->form->fill(BlogEntryData::fromBlogEntry($blogEntry));
        }
        
        return renderViewWithForm($form);
    }
}

Rendering

As I wrote earlier, Formidable is in no way responsible for rendering your forms. What it does give you, though, is all the field values and error messages you need to render your form. By itself it doesn’t tell you which fields exist on the form, so your view does need to know about that. Again, the documentation gives you very good insight into how you can render your forms with helpers, but here is a completely manual approach, to illustrate how Formidable works at the fundamental level:

Example form HTML

<form method="POST">
    <?php if ($form->hasGlobalErrors()): ?>
        <ul class="errors">
            <?php foreach ($form->getGlobalErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>

    <?php $field = $form->getField('title'); ?>
    <label for="title">Title:</label>
    <input type="text" name="title" id="title" value="<?php echo htmlspecialchars($field->getValue()); ?>">
    <?php if ($field->hasErrors()): ?>
        <ul class="errors">
            <?php foreach ($field->getErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>
    
    <?php $field = $form->getField('content'); ?>
    <label for="title">Content:</label>
    <textarea name="title" id="title"><?php echo htmlspecialchars($field->getValue()); ?></textarea>
    <?php if ($field->hasErrors()): ?>
        <ul class="errors">
            <?php foreach ($field->getErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>
    
    <input type="submit">
</form>

As I said, this is a very basic approach with a lot of repeated code. Of course, you are advised to write your own helpers to render the HTML as your project calls for it. What I personally end up doing most of the time is writing a few helpers which wrap around the helpers supplied by Formidable and have them wrap labels and other HTML markup around the created inputs, selects, and textareas. There is a big advantage to decoupling presentation from the form library, which you may already appreciate if you’ve wrestled with other popular libraries that bake in assumptions about how to mark up the output.
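
To illustrate, a hand-rolled helper might look like the sketch below. The renderTextField() name and the surrounding markup are my own inventions; the form calls are the same getField(), getValue(), getErrors(), and getMessage() methods used in the manual example above.

<?php
use DASPRiD\Formidable\Form;

// Sketch of a custom field helper; renderTextField() is a hypothetical name,
// but the Formidable calls are the ones already used in the manual example.
function renderTextField(Form $form, string $name, string $label) : string
{
    $field = $form->getField($name);

    $html = sprintf(
        '<label for="%1$s">%2$s</label><input type="text" name="%1$s" id="%1$s" value="%3$s">',
        htmlspecialchars($name),
        htmlspecialchars($label),
        htmlspecialchars($field->getValue())
    );

    foreach ($field->getErrors() as $error) {
        $html .= '<p class="error">' . htmlspecialchars($error->getMessage()) . '</p>';
    }

    return $html;
}

// Usage in a template: echo renderTextField($form, 'title', 'Title:');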

Final words

I hope this blog post gave you a few insights into Formidable and made you hungry to try it out yourself. It currently supports PHP 7.0 and up, and I’d like to get feedback if you see anything missing or anything that can be improved. As noted in the GitHub repository, there is still one small piece missing to make it fully typehinted: generics in PHP.

I’ve created an RFC together with Rasmus Schultz a while back, but we are currently missing an implementer, which is why the RFC is somewhat on hold. If you know something about PHP internals, feel free to hop in to make generics a reality for us!

I really have to thank Soliant at this point, who sponsored the development time to create Formidable!

The post Formidable – A Different Approach to Forms appeared first on Soliant Consulting.



Soliant Consulting
Salesforce Lightning looks great and works beautifully. To enhance it, I’ve added a new Multiselect component. Enjoy!

Salesforce Lightning Multiselect

This is another component blog… just a small one this time, showing you how to create and use my new Multiselect component.

For some of my other components, please look here:

What I’m going to show is how to take the static HTML defined on the Salesforce Lightning Design System (SLDS) web page and turn that into an actual, working component.

Method

  • Define the event that you’ll be using first. This event is used to tell the parent component that the selected value(s) have changed
  • The event is called the “SelectChange” event.
<aura:event type="COMPONENT" description="Despatched when a select has changed value" >
  <aura:attribute name="values" type="String[]" description="Selected values" access="global" />
</aura:event>

Next, we add the markup for the component itself. It is composed of the button that triggers the dropdown and the dropdown itself. The button contains a triangle icon and some text indicating what has been selected. The dropdown is a list driven by an aura:iteration. All selection/deselection logic is driven by the controller and helper classes.

<aura:component >

  <!-- public attributes -->
  <aura:attribute name="options" type="SelectItem[]" />
  <aura:attribute name="selectedItems" type="String[]" />
  <aura:attribute name="width" type="String" default="240px;" />
  <aura:attribute name="dropdownLength" type="Integer" default="5" />
  <aura:attribute name="dropdownOver" type="Boolean" default="false" />
	
  <!-- private attributes -->
  <aura:attribute name="options_" type="SelectItem[]" />
  <aura:attribute name="infoText" type="String" default="Select an option..." />
		
  <!-- let the framework know that we can dispatch this event -->
  <aura:registerEvent name="selectChange" type="c:SelectChange" />

  <aura:method name="reInit" action="{!c.init}"
      description="Allows the lookup to be reinitalized">
  </aura:method>

  <div aura:id="main-div"  class=" slds-picklist slds-dropdown-trigger slds-dropdown-trigger--click ">
	
    <!-- the disclosure triangle button -->
    <button class="slds-button slds-button--neutral slds-picklist__label" style="{!'width:' + v.width }" 
      aria-haspopup="true" onclick="{!c.handleClick}" onmouseleave="{!c.handleMouseOutButton}">
      <span class="slds-truncate" title="{!v.infoText}">{!v.infoText}</span>
      <lightning:icon iconName="utility:down" size="small" class="slds-icon" />
    </button>

    <!-- the multiselect list -->
    <div class="slds-dropdown slds-dropdown--left" onmouseenter="{!c.handleMouseEnter}" onmouseleave="{!c.handleMouseLeave}">
      <ul class="{!'slds-dropdown__list slds-dropdown--length-' + v.dropdownLength}" role="menu">

        <aura:iteration items="{!v.options_}" var="option">
          <li class="{!'slds-dropdown__item ' + (option.selected ? 'slds-is-selected' : '')}" 
            role="presentation" onclick="{!c.handleSelection}" data-value="{!option.value}" data-selected="{!option.selected}">
            <a href="javascript:void(0);" role="menuitemcheckbox" aria-checked="true" tabindex="0" >
              <span class="slds-truncate">
            <lightning:icon iconName="utility:check" size="x-small" class="slds-icon slds-icon--selected slds-icon--x-small slds-icon-text-default slds-m-right--x-small" />{!option.value}
          </span>
            </a>
          </li>
        </aura:iteration>

      </ul>
    </div>
  </div>
</aura:component>

As you can see, this is mostly just basic HTML and CSS using the Salesforce Lightning Design System. To make it work, we implement a JavaScript controller and helper.

These JavaScript objects load and sort “items” into the select list:

   init: function(component, event, helper) {

      //note, we get options and set options_
      //options_ is the private version and we use this from now on.
      //this is to allow us to sort the options array before rendering
      var options = component.get("v.options");
      options.sort(function compare(a,b) {
                     if (a.value == 'All'){
                       return -1;
                     }
                     else if (a.value < b.value){
                       return -1;
                     }
                     if (a.value > b.value){
                       return 1;
                     }
                     return 0;
                   });

      component.set("v.options_",options);
      var values = helper.getSelectedValues(component);
      helper.setInfoText(component,values);
    },

As you can see, I’m not touching any HTML – I’m relying on Lightning’s binding framework to do the actual rendering. By adding to the options list, Lightning will apply that to the aura:iteration defined in the component and render the list (hidden initially). Also note that there is an ‘All’ value that the system expects. Change this to whatever you like, or even remove it, but remember to change the text here in the controller :)

Another interesting area to explain is how selecting/deselecting is done:

    handleSelection: function(component, event, helper) {
      var item = event.currentTarget;
      if (item && item.dataset) {
        var value = item.dataset.value;
        var selected = item.dataset.selected;

        var options = component.get("v.options_");

        //shift key ADDS to the list (unless clicking on a previously selected item)
        //also, shift key does not close the dropdown (uses mouse out to do that)
        if (event.shiftKey) {
          options.forEach(function(element) {
            if (element.value == value) {
              element.selected = selected == "true" ? false : true;
            }
          });
        } else {
          options.forEach(function(element) {
            if (element.value == value) {
              element.selected = selected == "true" ? false : true;
            } else {
              element.selected = false;
            }
          });
          var mainDiv = component.find('main-div');
          $A.util.removeClass(mainDiv, 'slds-is-open');
        }
        component.set("v.options_", options);
        var values = helper.getSelectedValues(component);
        var labels = helper.getSelectedLabels(component);
        
        helper.setInfoText(component,values);
        helper.despatchSelectChangeEvent(component,labels);

      }
    },

I am using a custom object, ‘SelectItem’, because I’m not able to create a ‘selected’ attribute on Salesforce’s built-in version. In the code above, I’m looking at this value and either adding the item to the list, replacing the list with this one item, or removing it. In this case I’m using the shift key, but this can be customized to any key. Finally, I update the text with the new value and, if multiple values are selected, the count of values.

One tricky area was handling hiding and showing of the select list – I use the technique below:


    handleClick: function(component, event, helper) {
      var mainDiv = component.find('main-div');
      $A.util.addClass(mainDiv, 'slds-is-open');
    },

    handleMouseLeave: function(component, event, helper) {
      component.set("v.dropdownOver",false);
      var mainDiv = component.find('main-div');
      $A.util.removeClass(mainDiv, 'slds-is-open');
    },
    
    handleMouseEnter: function(component, event, helper) {
      component.set("v.dropdownOver",true);
    },

    handleMouseOutButton: function(component, event, helper) {
      window.setTimeout(
        $A.getCallback(function() {
          if (component.isValid()) {
            //if dropdown over, user has hovered over the dropdown, so don't close.
            if (component.get("v.dropdownOver")) {
              return;
            }
            var mainDiv = component.find('main-div');
            $A.util.removeClass(mainDiv, 'slds-is-open');
          }
        }), 200
      );
    }
  }
  • When the button is clicked, the list is shown.
  • When the mouse leaves the button but does not enter the dropdown, the list closes.
  • When the mouse leaves the button and enters the dropdown, the close is cancelled.
  • When the mouse leaves the list, it hides.

Seems simple, but getting it working nicely can be tough.

To use the component, simply add it as part of a form (or on its own if you’d like):

<div class="slds-form-element">
    <label class="slds-form-element__label" for="my-multi-select">Multi Select!!</label>
    <div class="slds-form-element__control">
        <c:MultiSelect aura:id="my-multi-select" options="{!v.myOptions}" selectChange="{!c.handleSelectChangeEvent}" selectedItems="{!v.mySelectedItems}" />
    </div>
</div>

Here’s what it looks like:

The MultiSelect item in action

That’s all for now.

Enjoy!

The post Create a Custom Salesforce Lightning Multiselect Component appeared first on Soliant Consulting.



Soliant Consulting

This blog post examines the functionality of two of FileMaker’s features and how they work together. The first is the Web Viewer, which is a special layout object that can display web content right in your FileMaker app. The next is WebDirect, which is FileMaker Server’s ability to automatically display your custom FileMaker app in a web browser.

Web Viewers and WebDirect

We have received several inquiries regarding Web Viewers not rendering in WebDirect. As these techniques become more popular, more developers may run into this issue. When I first debugged it, I assumed it was a limitation of WebDirect. However, after discussing it with co-workers Jeremy Brown and Ross Johnson, we found a couple of workarounds. The solution discussed here is the simplest and most elegant.

First, the Web Viewer, when shown on a FileMaker Pro layout, runs as its own independent web page, just as if you had opened a new tab in your web browser and loaded a URL. In WebDirect, however, the content needs to be loaded inside the web page as the content of an “iframe” element. Iframes are a special type of HTML element meant to easily reference and display other HTML content within the iframe object.

The remote content of an iframe object is referenced as an attribute, at a very basic level, like so:

<iframe src="your_url_here"></iframe>

Seems pretty straightforward, right? However, arbitrarily long URLs or odd characters may cause the iframe to break and not load.

JavaScript Graphs

JavaScript can be a great option to expand the functionality to include just about any type of graph you can imagine and populate it with your FileMaker data.

If you have used JavaScript, such as in Jeremy Brown’s useful Web Viewer Integrations Library,  to display graphs in the Web Viewer via data URLs, you may run into issues when displaying in WebDirect.

Data URIs

You are probably familiar with URLs that start with “http” or https” but there are many other types of uniform resource identifiers (URI). A data URI, instead of including a location, embeds the data to be displayed directly in the document. We commonly use them in FileMaker to construct HTML to display in a web viewer, and avoid network latency and dependencies, including JavaScript.

For example, you set the Web Viewer with HTML, preceded like this:

"data:text/html,<html>…</html>"

The issue with displaying arbitrarily large or complex data URLs in WebDirect is that the “src” attribute has the potential to break with some JavaScript included as part of the data URI. There is likely an unsupported character or combination somewhere in the included libraries that makes it incompatible with loading as a data URI directly.

What to Do?

Part of the syntax of a data URI allows for specifying the content as being encoded as Base64.

data:[<mediatype>][;base64],<data>

Typically, you would use this to represent non-textual data, such as images or other binary data. In this case, it can still be applied when the media type is “text/html” as well.

This provides a safe way of transferring the HTML data so it will be decoded by the web browser, where it is rendered at runtime.

Admittedly, this introduces a little more processing that has to happen somewhere, and it can cause a slight delay when rendering in FileMaker Pro compared with not encoding as Base64. However, we can test whether a user is in WebDirect or not and direct the output of the Web Viewer appropriately.

Case ( 
  PatternCount ( Get ( ApplicationVersion ) ; "Web" ) ;
  "data:text/html;base64," & Base64Encode ( HTML::HTML_Calc_Here ) ;
  "data:text/html," & HTML::HTML_Calc_Here
)

Note the addition of “;base64” if the application is coming from a “Web” client. With this test, we optimize for both clients and ensure that our content functions everywhere.

Here is the result in FileMaker Pro:

Results in FileMaker Pro (click image to enlarge).


The same layout viewed in WebDirect

Layout viewed in WebDirect (click image to enlarge).

You really have to look twice to see which screenshot belongs to which application!

Other Considerations

There are other factors to consider that may cause issues as well. So far, the assumption has been that all JavaScript and assets are loaded inline, without external references. You may still choose to have external references; just be aware that loading them in an iframe element may behave differently from how they are handled in a FileMaker Pro client.

It is a best practice to have an SSL certificate installed on your production FileMaker Server, and WebDirect will automatically use that certificate as well. That means that, with SSL enabled, WebDirect will redirect clients from HTTP requests to HTTPS. The consequence is that all of your content must also be secure, as far as your web browser is concerned. An HTTP site can reference HTTPS assets, but not the other way around. If you have SSL enabled, make sure that all external references, such as linked JavaScript libraries, are referenced with HTTPS as well.

For development servers using a self-signed certificate… well, pretty much nothing will load correctly, because the web browser will not load anything served with a certificate it cannot verify. The main site will load, but not content included from other sites in the page.

Then there are occasions where you may need to write your own web page to display in a Web Viewer, hosted from another web server entirely. In that case, you may need to enable CORS headers for it to work. Again, in FileMaker Pro clients it works fine, but in WebDirect it loads in an iframe, and web browsers restrict it to prevent cross-site scripting.

How to Support CORS in PHP

If you host your PHP page from the same FileMaker Server, making sure to match http vs. https, then there is no conflict about JavaScript loading from a different source. If, for some reason, you want to have the file load from a different location, you will want to add CORS support in your PHP file as well.

The final PHP file will look something like this:

<?php
// Enable CORS: allow requests from any origin
if (isset($_SERVER['HTTP_ORIGIN'])) {
    header("Access-Control-Allow-Origin: {$_SERVER['HTTP_ORIGIN']}");
    header('Access-Control-Allow-Credentials: true');
    header('Access-Control-Max-Age: 86400'); // cache for 1 day
}

// Access-Control headers are received during OPTIONS (preflight) requests
if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS') {
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD'])) {
        header("Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS");
    }
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS'])) {
        header("Access-Control-Allow-Headers: {$_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS']}");
    }
}

One other consideration, which I found when using one FileMaker Server to host a file for different WebDirect served solutions, was that there is an added HTTP header that is configured in the default site on FileMaker Server’s web server. This is done for added security for WebDirect to protect against cross site scripting attacks, so you may or may not want to adjust this setting for your needs.

If you are on a Windows server, you will find this setting in the IIS configuration for HTTP headers: it adds an “X-Frame-Options” header set to require the same origin. If you need to serve this PHP page from a different server, you will need to remove this header from being served by default. Then, in addition to the CORS support, this script will work from different servers. This may be seen as lowering the security on that machine, so it should probably be avoided by hosting your scripts on a different server if needed.


The post Display Complex Web Viewers in WebDirect appeared first on Soliant Consulting.



Soliant Consulting

I’m in the process of studying for my Salesforce certification, and it’s not easy! If you’re ahead of me and already have your certification, you’ve proven that you know all about the newest Salesforce release and you’re ready to send your company’s Salesforce ROI through the roof. That’s a major accomplishment, but make sure you hold onto it!

So how do you hold onto your Salesforce certification? The good news is that you don’t have to take the full exam every time you need to prove you’re current; much in the same way you don’t have to take a driving test each time your driver’s license expires, once you establish your Salesforce credentials for a given certification, there are far fewer hoops to jump through to keep it. All you have to do is take a short release exam to maintain that credential for each new release. It’s a pretty rational approach that lets us demonstrate we still know our stuff without adding much strain on our already packed schedules.

Once you establish your Salesforce credentials for a given certification, there are far fewer hoops to jump through to keep it.

Salesforce Certification Exam Cycle

As a Salesforce Developer or Admin, you need to take the exam each release cycle, which happens a bit more often than driver’s licenses expire — a new release comes out about three times per year, every four months or so.
Some of the release exams are:

  • Salesforce Certified Administrator Release Exam
  • Salesforce Certified Force.com Developer Release Exam
  • Salesforce Certified Platform App Builder Release Exam
  • Salesforce Certified Platform Developer I Release Exam, and
  • Salesforce Certified Pardot Consultant Release Exam

Administrator Release Exam

If you want to keep your Administrator, Advanced Administrator, Service Cloud Consultant, Sales Cloud Consultant, Community Cloud Consultant, or Field Service Lightning Consultant certification, you’ll need to take the Administrator Release Exam.

Force.com Developer Release Exam

If you want to maintain your certification as a Force.com Developer or Force.com Advanced Developer, you need to take the Force.com Developer Release Exam.

Platform Developer I Release Exam

To hold the Platform App Builder, Application Architect, or System Architect certification, you’ll have to take the Platform App Builder Release Exam. To be a certified Platform Developer I, Platform Developer II, or Application Architect you’ll need to take the Platform Developer I Release Exam.

Marketing Cloud Email Specialist Release Exam

If you are a Certified Marketing Cloud Email Specialist or Consultant, you’ll need to take the Marketing Cloud Email Specialist Release Exam.

Taking the Exam

You must answer about 15 questions in 30 minutes to complete a release exam. It’s an unproctored exam, so you’re allowed to reference whatever literature or online resources you’d like, so long as you complete the exam within the time limit. It’s a good idea to brush up on the release notes and watch the videos on Salesforce’s YouTube channel immediately before taking it so the information is fresh in your mind.

In addition to taking regular exams, you’ll also need to pay an annual $100 maintenance fee to keep your certification.

Be sure to take the exams by the deadlines Salesforce.com establishes, which are typically 8 or 9 months after the release. For example:

  • The Summer ’16 Release Exam is due March 24, 2017.
  • The Winter ’17 Release Exam is due July 14, 2017.

Important:

If you miss the deadline or fail the exam three times, your credentials will expire and you will have to take the full exam again, so make sure you know the deadlines and you’re prepared before you take the exam.


The post How to Keep Your Salesforce Certification appeared first on Soliant Consulting.



Soliant Consulting

Every year Soliant has an offsite where all the offices meet in one place; we wrap up our offsite with a volunteer activity. Last year we worked at the Elache Nature Center in Gainesville, GA. This year we returned to Georgia and had about 24 people who stayed an extra day and volunteered at a local non-profit shelter in Buford, GA. Shelters are always in need of food donations, so everyone participated in our volunteer effort by donating pantry items to the Home of Hope-Gwinnett Children’s Shelter. Volunteering after our offsite has become a tradition that allows us to give back to local communities and strengthen our bonds by working together to help others.


We started the day with an excellent breakfast buffet and then carpooled to Buford, GA. The ride to Buford was quiet in the beginning; I think everyone was tired from all the activities we had at the offsite, which was quite busy. We had many professional development sessions throughout the day and lots of entertaining activities at night, including a talent show and an awards dinner.

The Next Step Towards Independence

As soon as we arrived at the Home of Hope, Bridgette, the Food Services Manager, told us all about their non-profit. It is a residential care facility that provides a temporary home and support for homeless children from 0-17 years old and their mothers. Home of Hope also supports girls aging out of the foster-care system. The shelter provides housing, life coaching, and educational support to help moms and young ladies get back on their feet. Their goal is “not simply to be a place of refuge; we are the next step towards independence.”

Next, Bridgette gave us a tour of the facility. It’s a fantastic place; everything looked new and clean, and you could feel the care of the organization in the small details. They offer individual rooms for families, a kids’ playroom, quiet rooms, and a business center where the tenants must commit time during the day to look for jobs.

Getting Down to Work

Home of Hope needed help organizing their storage rooms. Our volunteer group lent a hand for a few hours by sorting and organizing their kitchen/cafeteria, the kids’ playroom, and storage closets. We split into teams, and I was part of the pantry team. The pantry was overflowing with donations, and we could barely enter it because of the boxes. We decided to take everything out, organize it by type of food, and label the shelves. Organizing is one of my favorite things to do, and I had an enthusiastic team that soon found a rhythm to get the task done. It took us less than a couple of hours to get everything organized.

After we had finished our work, we gathered for our last team lunch of the offsite. The volunteer activity was an excellent way to end our Soliant offsite week, and I’m proud to be part of a team that cares about giving back.

Soliant volunteers with Bridgette from the Home of Hope-Gwinnett Children's Shelter

The post Volunteering at Home of Hope – Gwinnett Children’s Shelter appeared first on Soliant Consulting.



Soliant Consulting
The purpose of this blog is to show you how to add Lightning components to Visualforce pages.  I am assuming that you already have basic knowledge of VF pages and also are able to create a basic Lightning component that you can view via a Lightning app.

Start off by creating a couple of new Lightning components and a Lightning app to hold them. I just used a couple of Lightning components I previously created when learning how to build them: helloWorld.cmp (see Figure 1) and helloPlayground.cmp (see Figure 2). I then added an app to hold them called ‘harnessApp.app’.

helloWorld.cmp

Figure 1. Hello World Lightning component


helloPlayground.cmp

Figure 2. Hello Playground component

<aura:application extends="ltng:outApp">
	<c:helloWorld />
	<c:helloPlayground />
</aura:application>

Notice the ‘extends="ltng:outApp"’ in the above app. This says that the app can be hosted outside of Lightning but will continue to use the Salesforce Lightning Design System (SLDS) styling. You can choose not to use the SLDS styling by extending ‘ltng:outAppUnstyled’ instead.

In the VF page, we add a special include for Lightning:

<apex:includeLightning />

We also need to create a section of the code for the Lightning components to appear in, so a simple one here is:




<div id="lightning" />

It looks empty, but we will take care of that with some JavaScript later.

$Lightning.use("c:harnessApp", function(){});

Here we use the new app that I created. If you run your page at this point, nothing will happen; the page requires you to manually tell components where to appear. Notice the ‘c:’ in the expression. This refers to the default namespace. If your org has a namespace other than the default, you will need to change the ‘c’ portion to whatever that is.

Inside the function that we just created, we add some more lines:

$Lightning.createComponent("c:HelloWorld", {}, "lightning", function(cmp){});

This actually reveals the component and places it inside the div with the id of ‘lightning’. Also, you will notice that it only shows one of the components at this point. Adding the next component is pretty simple:

$Lightning.createComponent("c: helloPlayground", {}, "lightning", function(cmp){});

If you run it again, you can see both apps now running!

NOTE: There might be a slight delay before the components show up, since they are revealed via JavaScript that needs to execute.



Looking at Figure 3, you might notice that ‘Hello World’ is under ‘Hello Playground’ even though the JavaScript above adds Hello World first. I could have given them their own containers to control where they show up, but when you add new components to the page, each new component is prepended in front of the others.

Figure 3 – Both apps running.

I made an adjustment to my page so that each one has its own div and I can better control where they show.

<apex:page >
	<apex:includeLightning />

	<div id="helloWorld" />
	<div id="helloPlayground" />

	<script>
		$Lightning.use("c:harnessApp", function()
		{
			$Lightning.createComponent("c:HelloWorld",
			{}, helloWorld", function(cmp){});
			$Lightning.createComponent("c:helloPlayground",
			{}, “helloPlayground", function(cmp){});
		});
	</script>
</apex:page>

Figure 4 – Completed VF Page

The post How to Place Lightning Components in Visualforce Pages appeared first on Soliant Consulting.



Soliant Consulting

With the release of version 15 in May 2016, FileMaker introduced a new feature – the Top Call Statistics Log – which tracks up to 25 of the most expensive remote calls that occur during a collection interval.

I created a video on this topic back in May and am following up now with a written blog post. The information here is essentially the same as in the video; my motivation is to create a text-based version, because I find text to be a more useful reference than a video.

Statistics Log Files

Some of the actions that a user takes when working with a file hosted on FileMaker Server are processed entirely client-side. An example is sorting data that has already been downloaded to the client. But most actions will result in one or more remote calls which are processed by the server. Some examples include navigating to a layout, creating a new record, committing a record, and performing a find.

While the large majority of remote calls are initiated by the client, it is possible for FileMaker Server to initiate a remote call to the client. An example of this is when FileMaker Server asks the client for the values of its global fields.

When we talk about “clients”, it is important to realize that this includes server-side scripts, the web publishing engine, and ODBC/JDBC connections in addition to the Pro, Go, and WebDirect clients.

When a solution’s performance is suboptimal, it could be due to a specific action that a user (or a group of users) is taking. Before FileMaker 15, we had a view into remote call activity only at an aggregate level, through the usage and client statistics log files. With the top call stats log, we now gain an additional tool which allows us to view statistics for individual remote calls – the top 25 most expensive ones collected during a specified time interval. Using this log file, we now have a chance at pinpointing specific operations which may be causing degraded performance.

The information stored in the three statistics log files is gathered during a collection interval whose default value is 30 seconds. Each entry in a statistics log file must be viewed from the context of its collection interval. At the end of every interval, the new information is added to the bottom of the log.

Here are the three statistics log files:
  • Usage Statistics (Stats.log): One entry which summarizes information about all of the remote calls, across all files and clients.
  • Client Statistics (ClientStats.log): One entry for every client which summarizes information about the remote calls to and from that client. *
  • Top Call Statistics (TopCallStats.log): Up to 25 entries showing discrete (not summarized) statistics from the most expensive remote calls.

* According to my understanding, the Client Statistics log is supposed to have only one entry per client for every collection interval, but in my testing, I have sometimes seen more than one entry for a client.

Configuring Log Settings

The top call statistics log is enabled in the admin console in the Database Server > Logging area as shown in Figure 1. Once enabled, it will continue to capture information even if the admin console is closed. However, if the Database Server is stopped, the top call statistics log will not automatically re-enable once the Database Server is started up again.

The top call statistics log can also be enabled or disabled using the command line as shown in Figure 2:

  • fmsadmin enable topcallstats -u admin -p pword
  • fmsadmin disable topcallstats -u admin -p pword

Figure 1. Enable top call statistics in the admin console under Database Server > Logging (click image to enlarge).

Figure 2. Use the command line to enable/disable top call statistics (click image to enlarge).

In addition to enabling and disabling the log, the admin console Database Server > Logging area is used to specify the duration of the collection interval and the size of the log file. The default values are 30 seconds for the collection interval and 40 MB for the log file.

The log file size setting pertains to all of the log files, but the collection interval duration is only relevant to the three statistics log files: usage, client, and top calls. When the file size is exceeded, the existing log file is renamed by appending “-old” to the file name, and a new log file is created. If a previous “-old” file already existed, it will be deleted.

You can experiment with making the collection interval shorter, but only set it to very short durations (like 1 second) while diagnosing. The client and top call statistics do create additional overhead for the server, so if you are already dealing with a stressed server, there is potential for further performance degradation. And of course the log files will grow in size much more quickly as well. So, this setting should not be kept at very low values indefinitely.

Viewing the Log File

Figure 3. First Row Option (click image to enlarge).

The log file data is stored in a tab-delimited text file with the name TopCallStats.log. For Windows, the default path for all log files is C:\Program Files\FileMaker\FileMaker Server\Logs. The path for Mac servers is /Library/FileMaker Server/Logs/. Unlike with a Mac, the Logs path can be changed on Windows by installing FileMaker Server to a non-default location.

There is no viewer built into the admin console for the top call stats log file, so to view the data, you will need to open it in a text editor or an application such as Excel. You can also drag the file onto the FileMaker Pro icon (for example, on your desktop), which will create a new database file and automatically import the log data into it. If you do so, select the option to interpret the first row as field names (see Figure 3).

Figure 4. Converted file displaying the top call stats (click image to enlarge).

Making Sense of the Top Call Stats Log Data

Each line in the log corresponds to a remote call, and each column corresponds to a particular kind of data. Here is the list of all columns followed by a detailed look at each one.

  • Timestamp
  • Start/End Time
  • Total Elapsed
  • Operation
  • Target
  • Network Bytes In/Out
  • Elapsed Time
  • Wait Time
  • I/O Time
  • Client Name

Timestamp – This is the timestamp for the collection interval, not for the remote call. In other words, all of the entries that were collected during the same interval will show the same timestamp value. The timestamps use millisecond precision, and the time zone shown is the same as the server. Sample value: 2016-04-23 10:55:09.486 -0500.

Start Time – This shows the number of seconds (with microsecond precision) from when the Database Server was started until the time the remote call started. Sample value: 191.235598.

End Time – Same as the Start Time, except that this shows when the remote call ended. If the remote call was still in progress when the data was collected, this value will be empty.

Total Elapsed – Number of microseconds elapsed for the remote call so far. This is the metric that determines which 25 remote calls were the most expensive ones for a given collection interval. The 25 remote calls are sorted in the log based on the Total Elapsed value, with the largest time at the top. Sample value: 1871.

Elapsed Time – Number of microseconds elapsed for the remote call in the collection interval being reported on. In the log file, the Elapsed Time column appears nearer the end of the columns, but I am elaborating on it now since it conceptually fits with the Total Elapsed column. Sample value: 1871.

The Total Elapsed and Elapsed Time values will typically be the same, but they will be different for a remote call that began in a previous collection interval. For example, in the accompanying diagram, the entries for remote call B in the second collection interval (at 60 seconds) would show Total Elapsed as 33 seconds and Elapsed Time as 18 seconds (the values would actually be shown in microseconds instead of seconds).

Figure 5. Remote calls diagram.

Operation – This includes the remote call name and, in parenthesis, the client task being performed. The client task is only shown if applicable. For some entries, the client task will also show the percent completed. For example, for a find operation, the value might say “Query (Find)” if the operation completed before the log data was gathered at the end of the collection interval. But if the operation was still in progress, the value might say “Query (Finding 10%)”.

List of all possible remote call names and client tasks:
Remote Calls:
  • Adjust Reference Count
  • Build Index
  • Commit Records
  • Compare Modification Counts
  • Create Record
  • Download
  • Download File
  • Download List
  • Download Temporary
  • Download Thumbnail
  • Download With Lock
  • Get Container URL
  • Get DSN List
  • Get File List
  • Get File Size
  • Get Guest Count
  • Get Host Timestamp
  • Lock
  • Lock Finished
  • Login
  • Logout
  • Notify
  • Notify Conflicts
  • ODBC Command
  • ODBC Connect
  • ODBC Query
  • Open
  • Perform Script On Server
  • Query
  • Remove All Locks
  • Request Notification
  • Serialize
  • Transfer Container
  • Unlock
  • Update Table
  • Upgrade Lock
  • Upload
  • Upload Binary Data
  • Upload List
  • Upload With Lock
  • Verify Container

Client Tasks:
  • Abort
  • Aggregate
  • Build Dependencies
  • Commit
  • Compress File
  • Compute Statistics
  • Consistency Check
  • Copy File
  • Copy Record
  • Count
  • Delete Record Set
  • Delete Records
  • Disk Cache Write
  • Disk Full
  • Disk I/O
  • Export Records
  • Find
  • Find Remote
  • Index
  • Lock Conflict
  • Optimize File
  • Perform Script On Server
  • Process Record List
  • Purge Temporary Files
  • Remove Free Blocks
  • Replace Records
  • Search
  • Skip Index
  • Sort
  • Update Schema
  • URL Data Transfer
  • Verify

Target – This shows the solution element that is being targeted by the remote call operation. See the accompanying tables (below) for some sample values as well as a list of all possible target values. The name of the hosted database file is always shown as the first value; i.e. before the first double colon. The additional information after the first value will be included if it is available. In the example shown, we can see that there is a lock on one or more records in the table whose ID is 138. The ID value is not the internal table ID; it is the BaseTable ID which comes from the XML Database Design Report (DDR). Using a table’s ID instead of its name is done for security reasons. If your table name is “Payroll”, and that name was exposed in the log file, that would leak potentially useful information about your database to a would-be hacker.

Sample values for Operation and Target:

Operation                   Target
Unlock                      MyFile
Commit Records (Commit)     MyFile::table(138)
Query (Find)                MyFile::table(138)::field definitions(1)
Lock                        MyFile::table(138)::records

List of all possible targets:
  • base directory
  • containers
  • custom menu
  • custom menu set
  • field definitions
  • field index
  • file reference
  • file status
  • font
  • global function
  • globals
  • layout
  • library
  • master record list
  • records
  • relationship
  • script
  • table
  • table occurrence
  • theme
  • value list

Network Bytes In/Out – These two columns show the number of bytes received from and sent to the client. Each entry shows a value that is pertinent to its remote call and for its corresponding collection interval only. Note that if a remote call spans more than one collection interval, it will likely send or receive additional bytes in the subsequent interval(s); i.e. the values will be different in the different collection intervals. Sample value: 57253.

Elapsed Time – The Elapsed Time statistic column is shown following the Network Bytes Out column, but we already covered it a bit earlier in the blog post, so please refer to the detailed explanation there.

Wait Time – Number of microseconds that a remote call spent waiting during the collection interval. This can happen, for example, when no processor cores are available at the time, or when another remote call holds a lock on a table that this remote call needs to access. Sample value: 1871.

I/O Time – Number of microseconds that a remote call spent in the collection interval reading from and writing to disk. Sample value: 1871.

Client Name – A name or identifier of a client, along with an IP address. If the client is a WebDirect client, that will be made apparent here. If the client is a server-side script, the script name will be shown.

Sample client name values:
  • John Smith (Smith Work Mac) [192.168.28.137]
  • Archive Old Records – Admin 1 (FileMaker Script)

How to use the top call stats log?

The top call stats log will give you a better shot at identifying the factors contributing to slow performance. For example, if you have a single table that everyone is writing to or searching against, then you would expect to see a lot of remote calls having to do with managing the locking of that table or the index. Another example: If you receive reports of FileMaker being slow for everyone, and if you spot a single client appearing in the top call stats log much more so than other clients, then you can investigate with that user to see what he or she is doing that is different from other users.
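
For ad hoc digging, ordinary command-line tools work well on the log itself. The sketch below counts how often each client IP address appears in the top call stats log; the log file name and path are assumptions based on a default macOS installation, so adjust them for your own deployment.

# Path and file name are assumptions for a default macOS install -- adjust as needed
cd "/Library/FileMaker Server/Logs"

# Extract the bracketed client IP addresses and count how often each one appears
grep -o "\[[0-9.]*\]" TopCallStats.log | sort | uniq -c | sort -rn | head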

Jon Thatcher did an excellent session at the 2016 DevCon during which he gave several examples of using Top Call Stats to troubleshoot performance issues (starting at around 34:37). A recording of the session is available here: “Under the Hood: Server Performance”.

Here is Jon’s general overview of how to use the three statistics logs to identify causes of performance issues:

  1. First, identify the problem resource (CPU, RAM, disk, or network) using the Server Statistics log or an OS tool such as Activity Monitor (OS X) or Task Manager or PerfMon (Windows). The Server Statistics log can show spikes (for example, a long elapsed time), but not which client caused them.
  2. Next, identify the problem client(s), if any, with Client Statistics. This log can show which client caused the spike, but not which operation caused it.
  3. Finally, use Top Call Statistics to identify the problem operation(s).

The post FileMaker Server Top Call Statistics Logging appeared first on Soliant Consulting.



Soliant Consulting

Nowadays, consumers have more information available to them online, resulting in new buying behavior that changed the sales process in recent years. These changes are influencing marketing and sales teams to combine tools and work together to deliver an effective sales experience.

Tools like Salesforce and Pardot are embracing new buying behaviors and helping marketing and sales teams to sell smarter.

Together, these tools are innovating and leveraging how businesses engage with customers in a cohesive, personalized selling process that meets consumers’ current needs. One of the most exciting ways to leverage your Salesforce and Pardot tool is exploring the Lightning Experience.

The Engagement History Lightning component is a custom component that displays Pardot prospect activities in Salesforce, giving sales representatives data about their prospects’ interactions and the ability to respond to those actions quickly but in a personable way. Here are some highlights:

  • Explore Prospect Activity History — See the prospect’s browsing history: how many website visits, which pages were viewed, and which content was downloaded. All of this is valuable information that helps you better understand prospect needs.
  • Simpler Interface — Replacing the Visualforce page with the Engagement History Lightning component gives you a simpler experience and a more customized view, enabling a more personalized conversation with your prospects.
  • Automatic Notifications — Sales reps are automatically notified when a prospect shows interest in their product. The notification enables them to manage leads with relevant content, and to act more swiftly and directly within Salesforce.

This Engagement History Lightning component is supported in the Lightning App Builder (and on any other app that allows the addition of custom Lightning components) conveniently making all the information available on the go.

You can set up this Lightning Experience enhancement by editing a record page or creating a new page from the Lightning App Builder. It is important to note that My Domain must be enabled in your Salesforce org to add the Engagement History component onto lead or contact pages, and configuring permissions might be needed. If you need step-by-step instructions on how to add components to Lightning Experience, read the Salesforce “Configure Lightning Experience Record Pages”  article for more information.

Take full advantage of your tools to improve your customer’s experience and enable your sales team. With just a few clicks, you can add the Lightning component to your company’s records page, and help your sales reps fully understand the buying behaviors of your customer base and work smarter.

The post Lightning Experience for Pardot appeared first on Soliant Consulting.



Soliant Consulting

Once Thanksgiving is over, it seems like the last month of the year kicks into high gear. At Soliant, each of our offices holds a holiday dinner where everyone gets together for good food, conversation, and gift exchange.

Holiday Cheer in California


The California team started with a dinner at West Park Bistro in San Carlos, CA. On the night of our dinner, nearby streets were closed off in preparation for the “Night of Holiday Lights” lighting festivities scheduled to commence that evening. What is normally an easy parking situation turned into a “Where’s Waldo” hunt for parking spaces. By the time everyone arrived at the restaurant, we were all ready for the meal to start, post haste!

Our private room was also where all the wine is kept. We were disciplined and did not grab any of the wine from the racks 😉 When it came time for our White Elephant gift exchange, there were a couple of sought after gifts that reached the limit on times stolen. We had a boisterous and fun close to our dinner.

Holiday Dinner in Pennsylvania


The next holiday dinner was at L’angolo Restaurant in south Philadelphia, where our Pennsylvania team gathered for a delicious Italian meal. When I spoke with Managing Director, Craig Stabler, about their party, he said they did a Yankee Swap. I was curious if it was the same thing as a White Elephant exchange and found out that it is — it goes by different names, such as Yankee Swap, Dirty Santa, and so on.

Everyone enjoyed the delicious food and rather than stealing gifts when it came time for their Yankee Swap, they all opened them at the same time. No one had to try hiding their gift under a chair to prevent it from getting stolen.

Holiday Celebration in Chicago

Our final holiday dinner was held at Formento’s, which is two blocks away from our Chicago headquarters where everyone enjoyed scrumptious Italian food. I’m sure with the extremely cold temperatures, that short walk from the office was much appreciated.

The Chicago team does a White Elephant gift exchange, but with an added twist. Six years ago, someone did a “re-gift” by bringing one of our gray, button down Soliant shirts as their gift. The next year, the person that ended up with the shirt brought it back, but with embellishments on the epaulettes. Thus, a tradition was born. Whoever ends up with the Soliant shirt at the end of the gift exchange must bring it back to the next year’s holiday dinner with a new embellishment.

Previous embellishments have included fancy epaulettes, color piping, a light, silhouette patches, a fleur-de-lis, and a hat. This year, Dawn Heady brought the shirt back with even more lights, including a light-up tie! The Chicago folks have brought their gift exchange game up to another level.

As we close the year, and begin our holiday break, I am so thankful for the fantastically talented, smart, and witty people that I get to work and interact with every day.

Happy holidays, everyone!

The post Happy Holidays at Soliant appeared first on Soliant Consulting.



Soliant Consulting

FileMaker Cloud running in Amazon Web Services (AWS) delivers tremendous value and cost savings over owning and operating a traditional on-premises server. However, there are still costs involved, and it is a good idea to be mindful of them. Indeed, tracking costs is part of a well-architected application.

This also applies to the standard version of FileMaker Server running on an AWS EC2 instance, so lessons learned here will also be applicable in the greater context across all AWS services.

I especially recommend trying out FileMaker Cloud and AWS services in general, which have a free trial and free tier services, respectively. Just remember, the free trial has a limit, so either continue with annual licensing for FileMaker or stay within the threshold when evaluating the services you will need.

Minding the Till

CloudWatch is an AWS service that offers the ability to, among other things, set Billing Alarms that let you know when you have exceeded spending thresholds. In the age of virtual servers where everything is scriptable, it makes good sense to take advantage of this feature to avoid unexpected charges when you get a monthly bill. It is also easy to set up, so why not?

Step 1 – Enable Billing Alerts

First, you will need to do this from the “root” account, which is the account you first created when setting up your AWS account. If you only use one account, then your account is the root account.

  • Log in to the AWS console and open the Billing and Cost Management dashboard.
  • Select “Preferences” from the left-hand navigation (see Figure 1).
  • Check the box next to “Receive Billing Alerts” to enable the service.
  • Click “Save preferences” to save changes.

Figure 1. Check the “Receive Billing Alerts” box (click image to enlarge).

Step 2 – Create an Alarm

Once you have enabled billing alerts, you can create a billing alert in CloudWatch. Open the CloudWatch console by opening the Services menu and selecting CloudWatch from the Management Tools section. Make sure you are in the US East region. This is the region that billing data is stored in, regardless of what worldwide region you have services running in.

  • Choose “Alarms”.
  • Click on “Create Alarm”.
  • Then click on “Billing Metrics” to select that category (see Figure 2).
  • Check the box on the line with “USD” under the Total Estimated Charge section.
  • Click “Next” to continue.
  • Give the alarm a name, like “Billing” (see Figure 3), and set the threshold you would like to be notified at. For example, whenever charges exceed $100 a month.

Figure 2. Create an Alarm (click image to enlarge).

Figure 3. Set the Alarm Threshold (click image to enlarge).

Step 3 – Specify Alert Recipients

Next we need to set up a distribution list of those who will get notified in the Actions section of this dialog (see Figure 4).

  • Click on “New list” next to the “Send notification to” drop down list.
  • Then you can add email addresses to the “Email list”.
  • Separate multiple email addresses with commas.
  • Make sure to give your notification list a name.
  • Click on “Create Alert” to finish.

Figure 4. Define the Alert actions (click image to enlarge).

The recipients will receive an email to validate their email addresses. Once confirmed, the recipients will start receiving alerts.

AWS Simple Notification Service

You may not have been aware of this, but you created an SNS Topic in the preceding steps. Simple Notification Service (SNS) is another very useful AWS service used to send various kinds of notifications. In this case, the notification is in the form of emails, but it could also include HTTP endpoints or text messages.

If you are interested to see details about the Topic you created, you can navigate to the SNS dashboard by opening the Services menu and selecting SNS from the Messages section. From there click on Topics to see the distribution list we created above. Click on the link for the ARN (Amazon Resource Name) to view the list of subscriptions to this topic. You will see the email addresses you entered above and their subscription status.

If you ever need to update the billing alert recipient list, you can do so here in the SNS Topic.
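
If you prefer scripting over clicking through the console, the same topic, subscription, and alarm can be created with the AWS CLI. This is only a sketch; the topic name, email address, account ID in the ARN, and $100 threshold are placeholders, and the commands target the US East (N. Virginia) region because that is where the billing metrics live.

# Create the SNS topic and subscribe an email address to it (topic name, account ID, and email are placeholders)
aws sns create-topic --name billing-alerts --region us-east-1
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:billing-alerts \
    --protocol email --notification-endpoint you@example.com --region us-east-1

# Create the alarm on the estimated charges metric, evaluated over a 6-hour period
aws cloudwatch put-metric-alarm --alarm-name Billing \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 100 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts \
    --region us-east-1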

Cost Optimization

Cost optimization is one pillar of the AWS Well-Architected Framework and an essential part of a deployment strategy. Billing alerts can help with this objective. They are easy to set up and configure, so I would recommend utilizing this service to aid in a successful FileMaker Cloud (or FileMaker Server) AWS deployment.

Be sure to read these other AWS related posts to learn more:

 

The post Billing Alerts for FileMaker Cloud appeared first on Soliant Consulting.



Soliant Consulting

Moving from Vagrant to Docker can be a daunting idea.  I’ve personally been putting it off for a long time, but since I discovered that Docker had released a “native” OS X client I decided it was finally time to give it a go.  I’ve been using Vagrant for years to spin up a unique development environment for each of the client projects that I work on and it works very well, but does have some shortcomings that I was hoping that Docker would alleviate. I’ll tell you now, the transition to Docker was not as difficult as I had built it up to be in my mind.

Let’s start off with the basics of Docker and how it differs from Vagrant.  Docker is a container based solution, where you build individual containers for each of the services you require for your application.  What does this mean practically?  Well, if you’re familiar with Vagrant you will know that Vagrant helps you create one large monolithic VM and installs and configures (through configuration management tools like Puppet or Chef) everything that your Application needs.  This means that for each project, you have a full stack VM running which is very resource intensive.  Docker on the other hand can run only the services you need by utilizing containers.

Docker Containers

So what are Docker containers?  Well, if we’re developing a PHP application, there are a few things that we will need.  We need an application server to run PHP, a web server (like Apache or nginx) to serve our code, and a database server to run our MySQL instance.  In Vagrant, I would have built an Ubuntu VM and had Puppet install and configure these services on that machine.  Docker allows you to separate those services and run each service in its own container, which is much more lightweight than a full VM.  Docker then provides networking between those containers to allow them to talk to each other.

NOTE: In my example below I’m going to combine the PHP service and the Apache service into one container for simplicity and since logically there isn’t a compelling reason to separate them.

One Host to Rule Them All

At first running multiple containers seems like it would be MORE resource intensive than Vagrant, which only runs a single VM.  In my example, I’m now running multiple containers where I only had to run a single Vagrant VM… how is this a better solution?  Well, the way that Docker implements its containers makes it much more efficient than an entire VM.

Docker at its heart runs on a single, very slimmed down host machine (on OS X).  For the purpose of this article, you can think of Docker as a VM running on your machine and each container that you instantiate runs on the VM and gets its own sandbox to access necessary resources and separate it from other processes.  This is a very simplistic explanation of how Docker works and if you’re interested in a more in-depth explanation, Docker provides a fairly thorough overview here: https://docs.docker.com/engine/understanding-docker/

Docker Images

Now that we know what Docker containers are, we need to understand how they’re created.  As you may have guessed from the header above, you create containers using Docker images.  A Docker image is defined in a file called a ‘Dockerfile’, which is very similar in function to a Vagrantfile.  The Dockerfile simply defines what your Image should do.  Similar to how an Object is an instance of a Class, a Docker Container is an instance of a Docker Image. Like an object, Docker Images are also extensible and re-usable.  So a single MySQL image can be used to spin up database service containers on 5 different projects.
You can create your own Docker Images from scratch or you can use and extend any of the thousands of images available at https://hub.docker.com/

Image Extensibility

As I noted above, Docker Images are extensible, meaning that you can use an existing image and add your own customizations on top of it.  In the example below, I found an image on the Docker Hub ‘eboraas/apache-php’ that was very close to what I needed with just a couple tweaks.  One of the big advantages of Docker is that you are able to pull an image and extend it to make your own customizations.  This means that if the base image changes, you will automatically get those changes next time you run your docker image without further action on your part.

Docker Compose

When you install Docker on OS X, you’ll get a tool called Docker Compose.  Docker Compose is a tool for defining and running applications with multiple Docker containers.  So instead of having to individually start all of your containers on the command line, each with their own parameters, it allows you to define those instructions in a YAML file and run one command to bring up all the containers.

Docker Compose is also what will allow your Docker containers to talk to each other.  After all, once your web server container is up and running it will need to talk to your database server which lives in its own container.  Docker Compose will create a network for all your containers to join so that they have a way to communicate with each other.  You will see an example of this in our setup below.

Docker Development Application Setup

All of this Docker stuff sounds pretty cool, right?  So let’s see a practical example of setting up a typical PHP development environment using Docker.  This is a real world example of how I set up my local dev environment for a new client with an existing code base.

Install Docker

The first thing you’re going to want to do is install Docker.  I’m not going to walk through all the steps here as Docker provides a perfectly good guide.  Just follow along the steps here: https://docs.docker.com/docker-for-mac/

Now that you’ve got Docker installed and running, we can go ahead and open up a terminal and start creating the containers that we’ll need!
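
Before creating containers, it’s worth confirming the installation from that terminal. These quick checks are just a sanity test:

docker --version           # prints the installed Docker version
docker-compose --version   # confirms Docker Compose came along with the install
docker run hello-world     # pulls and runs a tiny test image to verify the engine works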

MySQL Container

Now, typically I would start with my web server and once that is up and running I would worry about my database.  In this case the database is going to be simpler (since I’ll need to do some tweaking on the web server image) so we’ll start with the easier one and work our way up.  I’m going to use the official MySQL image from the Docker Hub: https://hub.docker.com/_/mysql/

You can get this image by running:

docker pull mysql

After pulling the mysql image you should be able to type ‘docker images’ and it will show up in the list:

Docker Images

Now we pulled the image to our local machine and we can then run it with this command:

docker run mysql

This will create a container from the image with no configuration options at all, just a vanilla MySQL server instance.  This is not super useful for us, so let’s go ahead and `Ctrl + C` to stop that container and we’ll take it one step further with this command:

docker run -p 3306:3306 --name my-mysql -e MYSQL_ROOT_PASSWORD=1234 -d mysql:5.6

We’re now passing in a handful of optional parameters to our run command which do the following:

  • `-p 3306:3306` – This option is for port forwarding. We’re telling the container to forward its port 3306 to port 3306 on our local machine (so that we can access mysql locally).
  • `--name my-mysql` – This is telling Docker what to name the container. If you do not provide this, Docker will just assign a randomly generated name which can be hard to remember/type (like when I first did this and it named my container `determined_ptolemy`)
  • `-e MYSQL_ROOT_PASSWORD=1234` – Here we are setting an Environment variable, in this case that the root password for the MySQL server should be ‘1234’.
  • `-d` – This option tells Docker to background the container, so that it doesn’t sit in the foreground of your terminal window.
  • `mysql:5.6` – This is the image that we want to use, with a specified tag. In this case I want version 5.6 so I specified it here.  If no tag is specified it will just use latest.

After you’ve run this command, you can run ‘docker ps’ and it will show you the list of your running containers (if you do ‘docker ps -a’ instead, it will show all containers, not just running ones).

This is kind of a clunky command to have to remember and type every time you want to bring up your MySQL instance.  In addition, bringing up the container in this way forwards the 3306 port to your local machine, but doesn’t give it an interface to interact with other containers.  But no need to worry; this is where Docker Compose comes in handy.

For now, let’s just stop and remove our container and we’ll use it again later with docker-compose.  The following commands will stop and remove the container you just created (but the image will not be deleted):

docker stop my-mysql
docker rm my-mysql
NOTE: Explicitly pulling the image is not required; you can simply do `docker run mysql` and it will pull the image and then run it. We pulled it explicitly just for the purpose of demonstration.

Apache/PHP Container

I’ve searched on https://hub.docker.com and found a suitable Apache image that also happens to include PHP:  https://hub.docker.com/r/eboraas/apache-php/.  Two birds with one stone, great!

Now, this image is very close to what I need but there’s a couple of things missing that my application requires.  First of all, I need the php5-mcrypt extension installed.  This application also has an ‘.htaccess’ file that does URL rewriting, so I need to set ‘AllowOverride All’ in the Apache config.  So, I’m going to create my own image that extends the ‘eboraas/apache-php’ image and makes those couple changes.  To create your own image, you’ll need to first create a Dockerfile.  In the root of your project go ahead and create a file named ‘Dockerfile’ and insert this content:

FROM eboraas/apache-php
COPY docker-config/allowoverride.conf /etc/apache2/conf.d/

RUN apt-get update && apt-get -y install php5-mcrypt && apt-get clean && rm -rf /var/lib/apt/lists/*


Let’s go through this line-by-line:
  1. We use ‘FROM’ to denote what image we are extending. Docker will use this as the base image and add on our other commands
  2. Tells Docker to ‘COPY’ the ‘docker-config/allowoverride.conf’ file from my local machine to ‘/etc/apache2/conf.d’ in the container
  3. Uses ‘RUN’ to run a command in the container that updates apt and installs php5-mcrypt and then cleans up after itself.

Before this will work, we need to actually create the file we referred to in line 2 of the Dockerfile.  So create a folder named ‘docker-config’ and a file inside of that folder called ‘allowoverride.conf’ with this content:

<Directory "/var/www/html">
AllowOverride All
</Directory>

The following commands do not need to be executed for this tutorial; they are just examples.  If you do run them, just be sure to stop the container and remove it before moving on.

At this point, we could build and run our customized image:

docker build -t nick/apache-php56 .

This will build the image described in our Dockerfile and name it ‘nick/apache-php56’.  We could then run our custom image with:

docker run -p 8080:80 -p 8443:443 -v /my/project/dir/:/var/www/html/ -d nick/apache-php56

The only new tag in this is:

  • `-v /my/project/dir/:/var/www/html/` – This is to sync a volume to the container. This will sync the /my/project/dir on the local machine to /var/www/html on the container.

Docker Compose

Instead of doing the complicated ‘docker run […]’ commands manually, we’re going to go ahead and automate the process so that we can bring up all of our application containers with one simple command!  The command that I’m referring to is ‘docker-compose’, and it gives you a way to take all of those parameters that we tacked onto the ‘docker run’ command and put them into a YAML configuration file.  Let’s dive in.

Create a file called ‘docker-compose.yml’ (on the same level as your Dockerfile) and insert this content:

version: '2'
services:
  web:
    build: .
    container_name: my-web
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - .:/var/www/html
    links:
      - mysql
  mysql:
    image: mysql:5.6
    container_name: my-mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: 1234

This YAML config defines two different containers and all of the parameters that we want when they’re run.  The ‘web’ container tells it to ‘build: .’, which will cause it to look for our Dockerfile and then build the custom image that we made earlier.  Then when it creates the container it will forward our ports for us and link our local directory to ‘/var/www/html’ on the container.  The ‘mysql’ container doesn’t get built, it just refers to the image that we pulled earlier from the Docker Hub, but it still sets all of the parameters for us.

Once this file is created you can bring up all your containers with:

docker-compose up -d
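
A few companion docker-compose commands are worth knowing once the containers are up; these are standard subcommands, listed here only as a quick reference:

docker-compose ps        # list the containers defined in docker-compose.yml and their status
docker-compose logs -f   # follow the combined logs of all the containers
docker-compose stop      # stop the containers without removing them
docker-compose down      # stop and remove the containers (and the default network)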

Using Your Environment

If you’ve followed along, you should be able to run `docker ps` and see both of your containers up and running.  Since we forwarded port 80 on our web container to port 8080 locally, we can visit ‘http://localhost:8080’ in our browser and be served the index.php file that is located in the same directory as the docker-compose.yml file.

I can also connect to the MySQL server from my local machine (since we forwarded port 3306) by using my local MySQL client:

mysql -h 127.0.0.1 -u root -p1234
NOTE: You have to use the loopback address instead of localhost to avoid socket errors.

But how do we configure our web application to talk to the MySQL server?  This is one of the beautiful things about docker-compose.  In the ‘docker-compose.yml’ file you can see that we defined two things under services: mysql and web.  By default, docker-compose will create a single network for the app defined in your YAML file.  Each container defined under a service joins the default network and is reachable and discoverable by other containers on the network.  So when we defined ‘mysql’ and ‘web’ as services, docker-compose created the containers and had them join the same network under the hostnames ‘mysql’ and ‘web’.  So in my web application’s config file where I define the database connection parameters, I can do the following:

define('DB_DRIVER', 'mysqli');
define('DB_HOSTNAME', 'mysql');
define('DB_USERNAME', 'root');
define('DB_PASSWORD', '1234');
define('DB_DATABASE', 'dbname');

As you can see, all I have to put for my hostname is ‘mysql’ since that is what the database container is named on the Docker network that both containers are connected to.
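
If you ever want to confirm that the two containers really do share a network, you can inspect it from the host. This is just a sketch; the network name is derived from your project directory, so ‘myproject_default’ is a placeholder, and the last command assumes getent is available in the image (it is in most Debian-based images).

docker network ls                          # the compose network usually appears as <project>_default
docker network inspect myproject_default   # lists the containers attached to the network
docker exec my-web getent hosts mysql      # resolves the 'mysql' hostname from inside the web container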

Conclusions

Now I’ll circle back to my original comparison of Vagrant to Docker.  In my experience so far with Docker, I believe it to be better than Vagrant in almost every aspect I can think of.  Docker uses fewer resources: compared to running a full stack VM, these containers are so lightweight that I can actually feel the performance difference on my laptop.  Docker is faster to spin up environments: doing a ‘vagrant up --provision’ for the first time would often take in excess of 15 minutes to complete, whereas the ‘docker-compose up -d’ that we just ran took a matter of seconds. Docker is easier to configure: what would have taken me a long time writing Ruby scripts (or generating them with Puphpet) for Vagrant took no time at all when extending a Docker image and adding a few simple commands.

Hopefully this article was helpful for you in exploring what Docker has to offer.  Docker also has extensive and detailed documentation available online at: https://docs.docker.com/

If you still don’t feel ready to dive right in, it may be helpful to run through the “Get Started with Docker” tutorial that Docker provides: https://docs.docker.com/engine/getstarted/

The post A PHP Developer’s Transition from Vagrant to Docker appeared first on Soliant Consulting.



Soliant Consulting


While small and incremental deployments of features to a Salesforce production org are best practice, there are times when multiple or large areas of functionality must be released simultaneously. Accordingly, a large-scale Salesforce deployment can invoke a high degree of ambivalence among the team involved in its preparation. On one hand, there should be a great deal of excitement. Chances are the functionality you are preparing to implement will alleviate pain points in your existing org, or perhaps greatly simplify existing workflows. On the other hand, it’s also quite normal to feel a degree of anxiety. Large-scale Salesforce deployments merit significant planning and attention in order to ensure a successful rollout. In those situations, proper steps should be taken to minimize disruption of the production environment and operations. With these practices in place, you can help to ensure that any deployment is truly successful.


Salesforce Deployment

Large-scale Salesforce deployments merit significant planning and attention in order to ensure a successful rollout.

  • Thorough end-to-end testing

    Oftentimes, end-to-end testing may be neglected in favor of unit tests, which focus on specific details. Ensuring that proper end-to-end testing of the features entailed in your deployment has been conducted should make you feel much more comfortable about the user experience post-deployment. For added benefit and confidence, this testing should be completed by the impacted user groups.

  • Testing corner and edge cases

    Salesforce deployments require that code being deployed to production have tests that provide 75% code coverage. While this is often covered by granular, code-based unit tests, applications that are business-oriented should also be subjected to efficient, yet thorough end-to-end testing. This is to ensure that the integrity of complex business processes is maintained. These tests are typically conducted by users, rather than code, which allows for the detection of potential user-experience issues. Accordingly, upon ensuring that proper end-to-end testing is conducted, you should feel much more comfortable about user experience post-deployment.

  • Testing in a full sandbox

    The Salesforce platform’s multi-tenant architecture means that there are significant limits that must be accounted for when developing custom applications. Some of these limits can only be tested with large amounts of data. As such, it is invaluable to ensure that your user acceptance testing is conducted in a full sandbox environment. This is particularly important, as it is the only environment which supports performance and load testing. Moreover, it allows your testing environment to be a complete replica of your production org – encompassing all data (including metadata), object records and apps. While the cost of a full sandbox may make your team hesitant, it is entirely justified with the invaluable test coverage provided. This in turn greatly lowers the risk of post-deployment issues, and accordingly results in saving the time and costs associated with encountering such issues.

  • Making a copy of the existing production environment, if applicable

    Version control tools, such as Git and Subversion, are an excellent way of capturing the state of an org’s codebase through each release. If you do not have a version control system in place, having a backup copy of the existing production environment, through a sandbox refresh prior to deployment, allows for the capability of swiftly rolling back to the previous system in the event of a critical deployment issue. Additionally, you should be sure to schedule weekly Organization Data Exports to ensure that all of your Org data is backed up on a consistent basis.

  • Ensure resources are on standby for resolving issues that arise

    While you certainly want to feel confident that your deployment will go off without a hitch, it’s invaluable to have resources readily available to handle any issues that are reported. To take things a step further, it is even more beneficial to proactively discuss a triaging plan with your team – such that you know precisely who would handle different types of issues.

  • Establishing a formal go/no-go plan prior to the release, and setting a firm timeframe for making that decision

    When initially completing a deployment plan, one of the most crucial dates to set is when to make a formal “go/no go” decision with the team. This should be assessed in a meeting that includes all parties involved in the deployment. Prior to this meeting, it’s imperative to outline all facets that should be taken into consideration, separating the truly critical components from areas that can be refined beyond the designated “go/no go” date, or potentially after deployment.

There are also a few additional steps that are important to consider. You’ll want to develop some comprehensive communication to be distributed to the user base, detailing the new functionality. It’s also greatly beneficial to offer any training that may be necessary. Finally, you’ll of course want both your development team and business stakeholders to verify the changes in production upon deployment.

It is inevitable to feel some of the inherent anxiety that comes along with a large-scale deployment. However, upon following the practices outlined above, your team should feel truly confident that you have comprehensively covered all areas and are headed towards another successful release.

The post Preparing for a Large-scale Salesforce Deployment appeared first on Soliant Consulting.



Soliant Consulting

The list data type is rather versatile, and its use is essential in many programmatic solutions on the Salesforce platform. However, there are some scenarios when lists alone do not provide the most elegant solution. One example is routing the assignment of accounts based on each user’s current capacity.

Suppose we want to assign the oldest unassigned account to a user at the moment when a new account is entered into Salesforce. When working with these accounts, we might want to order them by received date, with the first entry containing the oldest date. How can we design a routing tool so that the next account to assign is stored at the front of a list?

Figure 1. An account queue, ordered by received date (click image to enlarge).

Queue Abstract Data Type

The queue abstract data type is well suited for this type of problem. Our first step should be to define the Queue interface and what methods we want to include:

public interface Queue{
	//returns the number of entries in the queue
	Integer size();

	//returns true if there are no entries in the queue
	boolean isEmpty();

	//places the record at the end of the queue
	void enqueue(SObject o);

	//returns the entry at the front of the queue but does not remove it
	SObject first();

	//returns and removes the entry at the front of the queue
	SObject dequeue();
}

Next we need to implement this interface for accounts:

public class AccountQueue implements Queue{

	private List<Account> accounts;

	//default constructor
	public AccountQueue(){
		this.accounts = new List<Account>();
	}

	//returns the number of accounts in the queue
	public Integer size(){
		return accounts.size();
	}

	//returns true if there are no accounts in the queue
	public boolean isEmpty(){
		return accounts.isEmpty();
	}

	//places the account at the end of the queue
	public void enqueue(SObject o){
		Account newAccount = (Account) o;
		accounts.add(newAccount);
	}

	//returns the account at the front of the queue
	public Account first(){
		if(isEmpty()){
			return null;
		}

		return accounts.get(0);
	}

	//returns and removes the account at the front of the queue
	public Account dequeue(){
		if(isEmpty()){
			return null;
		}

		Account firstAccount = accounts.get(0);
		accounts.remove(0);
		return firstAccount;
	}
}

 

On the Account object, we should create two custom fields. The first is called Assigned, which is a lookup to the User object. The second is Received Date, which is a date field. On the User object, we can create a number field called Capacity, which tells us how many more accounts a user can be assigned. Once this number reaches zero, we should not assign any more accounts to that particular user.

In order for this process to occur when an account is inserted, we will need an Account trigger:

trigger Account on Account (before insert) {
	if(trigger.isBefore && trigger.isInsert){
		new AccountTriggerHandler().beforeInsert(trigger.new);
	}
}

Here is the trigger handler:

public class AccountTriggerHandler {

	//list of accounts that are available for assignment to a User
	private List<Account> unassignedAccounts {
		get{
			if(unassignedAccounts == null){
				unassignedAccounts = [SELECT ID,
				Name,
				Received_Date__c
				FROM Account
				WHERE Received_Date__c != null AND Assigned__c = null ORDER BY Received_Date__c];
			}

			return unassignedAccounts;
		}

		private set;
	}

	//Account queue where the account at the front of the queue has the oldest received date
	private AccountQueue unassignedAccountQueue {
		get{
			if(unassignedAccountQueue == null){
				unassignedAccountQueue = new AccountQueue();
				for(Account a : unassignedAccounts){
					unassignedAccountQueue.enqueue(a);
				}

			}

			return unassignedAccountQueue;
		}

		private set;
	}

	//Map of users that are able to receive assigned applications
	private Map<ID,User> userMap {
		get{
			if(userMap == null){
				userMap = new Map<ID,User>([SELECT ID, Capacity__c FROM User WHERE Capacity__c != null]);
			}
			return userMap;
		}
		private set;
	}

	public void beforeInsert(List<Account> accountList){
		//obtain the number of accounts in the trigger
		Integer numberOfAccountsToAssign = accountList.size();

		//hold a list of accounts that will be assigned to users
		List<Account> accountsToAssign = new List<Account>();

		for(Integer i = 0; i < numberOfAccountsToAssign; i++){
			//obtain the id of the next user that can receive an application
			ID userIDNextToAssign = getNextAssignedUser();

			//reduce that user's capacity by 1
			reduceCapacity(userIDNextToAssign);

			//determine the next account that is to be assigned
			Account unassignedAccount = unassignedAccountQueue.dequeue();

			//if there were any accounts remaining in the queue, assign that account
			if(unassignedAccount != null){
				unassignedAccount.Assigned__c = userIDNextToAssign;
				accountsToAssign.add(unassignedAccount);
			}
		}

		//update unassigned accounts
		update accountsToAssign;

		//update the user records
		update userMap.values();

	}

	//return the id of the user that will be assigned to the next available account
	private ID getNextAssignedUser(){
		ID largestCapacityUserID;

		//find the user id of the largest capacity user
		Integer maxCapacity = 0;

		for(ID userID : userMap.keySet()){
			Integer userCapacity = (Integer) userMap.get(userID).Capacity__c;
			if(maxCapacity < userCapacity && userCapacity > 0){
				maxCapacity = userCapacity;
				largestCapacityUserID = userID;
			}
		}

		return largestCapacityUserID;
	}

	//reduce the capacity of the user with id userID by 1
	private Map<ID,User> reduceCapacity(ID userID){
		//decrease the capacity of that user by 1
		if(userID != null){
			User usr = userMap.get(userID);
			usr.Capacity__c = usr.Capacity__c - 1;
		}

		return userMap;
	}
}

The trigger fires when a new account is entered into Salesforce, then searches for the account with the oldest received date, and assigns it to the user with the highest capacity. To demonstrate, we can set one user to have a capacity of 1, and a second user to have a capacity of 2. If we insert three accounts, then the three accounts with the oldest received date will be distributed between both users.

Here are the existing accounts before we insert the new accounts:

Figure 2. State of existing accounts before the new ones are inserted (click image to enlarge).

Here are the assignments after we add the new accounts:

Figure 3. The older accounts have now been assigned after inserting the new ones (click image to enlarge).

Since three accounts were inserted into Salesforce, we needed to assign three accounts.

  • Mario had a capacity of 2, so he was assigned the first account in the queue.
  • Next, Taylor, who had a capacity of 1, was assigned an account. His remaining capacity was then 0, so he could not take any more.
  • Mario, now at a capacity of 1, received the next account in the queue.

There are some alternative ways to approach this problem using only lists, but using the Queue interface simplifies the implementation. One approach could have been to reverse the order of the unassigned accounts, starting with the newest account as the first item and the oldest as the last. This would be unintuitive, and would require the developer to store or compute the size of the list in order to access the last element. Another approach might have been to keep the same order as the queue solution, but to simply remove the element from the front of the list using the remove(index) function. The queue implementation abstracts this process and removes the requirement to continually check if the list is empty, as the dequeue method already provides that functionality.

The queue abstract data type is a natural fit for any first-in, first-out business requirement. Queues can also be extended to other objects in Salesforce, rather than just the Account object. Queues and other abstract data types can provide templates for solutions to many programming challenges, and Salesforce projects are no exception.

The post Using the Queue Abstract Data Type to Assign Records to Users appeared first on Soliant Consulting.



Soliant Consulting

The Web Viewer Integrations Library contains web integrations that enable you to extend the functionality of your custom application. Jeremy Brown introduces the library, talks about its structure, plays with a web integration, and implements a web integration within your custom FileMaker application.

Introducing the Web Viewer Integrations Library

December 12, 2016 – The Web Viewer in FileMaker allows us to build deeper functionality by integrating web libraries into a custom app. Jeremy Brown introduces the Web Viewer Integrations Library, free for your use.

Data Structure of the Web Viewer Integrations Library

December 12, 2016 – The Web Viewer Integrations Library is set up using best practices found in web development. Jeremy Brown explains how this file is set up and discusses the advantages and disadvantages.

Getting to Know the Web Viewer Integrations Library

December 12, 2016 – The Web Viewer Integrations library contains 22 integrations available for use in your custom apps. Jeremy Brown walks through the file and explains its features.

Playing with a Web Viewer Integration

December 12, 2016 – The Web Viewer Integrations Library provides a safe playground to learn about how to integrate and make changes to the code. Jeremy Brown leads you through the features and provides an example.

Implementing a Web Viewer Integration

December 12, 2016 – In 30 minutes you can fully integrate a web library into your custom FileMaker application. Jeremy Brown shows you how.

The post Web Viewer Integrations Library Playlist appeared first on Soliant Consulting.



Soliant Consulting

This is the third in the series of videos about the Web Viewer Integrations Library.

The Web Viewer Integrations library contains 22 integrations available for use in your custom apps. Jeremy Brown walks through the file and explains its features.

The post Getting to Know the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the second in the series of videos about the Web Viewer Integrations Library.

The Web Viewer Integrations Library is set up using best practices found in web development. Jeremy Brown explains how this file is set up and discusses the advantages and disadvantages.

The post Data Structure of the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the first in the series of videos about the Web Viewer Integrations Library.

The Web Viewer in FileMaker allows you to build deeper functionality by integrating web libraries into a custom application. In this video, Jeremy Brown introduces the Web Viewer Integrations Library, free for your use.

The post Introducing the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the first in the series of posts about the Web Viewer Integrations Library.

I seem to have developed an obsession with the web viewer in FileMaker. That fact is surprising because it was only three years ago that I knew nothing about the object. Back then, I decided that, while continuing to learn FileMaker, I’d start to learn how to do more in my custom apps with the languages of the web. Every chance I got, I’d try to implement a Google map or a chart either into a project or into a simple demo file. At first it was painfully slow; trying to get a chart into FileMaker via the web viewer was too complex, and the time spent on the implementation was overkill. Never one to back away from a challenge, I pushed through, and eventually it became much easier. The object became more fun to work with, and I pushed myself to try to do more with it.

I’m grateful to people here at Soliant for helping with the inspiration. My colleague Mike Duncan has written about it in his “Getting Started with Javascript and FileMaker” post, and so has Ross Johnson in his post, “Drag and Drop jQuery Interface for Exploring Records to CSV”. I’ve explored what could be done with it briefly as well. Suffice it to say, we at Soliant like the web viewer.

It occurred to me one day: why not put all of the integrations together in one library? I had put together many sample files, but they were all over my computer and, in some cases, I had forgotten the password to open them. This idea, putting all the integrations together, led me to create the Web Viewer Integrations Library, which I present to you fully open and unlocked.

My goal for the library was twofold:

  • First, the collection should gather together integrations that are useful in FileMaker and that apply to typical use cases presented by clients.
  • Second, it needed to be standardized as much as possible and easily extracted from this file into a client app.
Web Viewer Integrations Library (click image to enlarge).

I set about that by scouring the internet for those integrations that are useful to FileMaker use cases: filling in an address form, charting, mapping, and so forth. I got ideas from the work I’ve done in projects as well as from use cases that other people were working on. While on the hunt, I allowed myself the time to play a bit with less-useful integrations. What is contained in this library is probably 85% extremely useful and 15% less useful, but still worth having.

To achieve the second goal, I took up a number of best practices that have been discussed between myself and various people here at Soliant. In the next post, we’ll take a look at the way this file is set up and have a brief discussion on those best practices.

So this library, version 1.0, is the result of my scouring the internet, wrestling with the structure, and lots of testing to get these 22 integrations to work well inside a web viewer and with FileMaker data. It is free and open for you to use and to manipulate and export into your own custom app.

My hope is that anyone can learn how to use the web viewer and can bring deeper functionality into their FileMaker custom app quickly.

About cross-platform compatibility: I built this file primarily on macOS. On Windows, most of the integrations work well; some are missing cool animations and such, so be aware of that. I’m looking into dealing with this compatibility issue and hope to have an update soon.

A final note: the integrations presented in this library are the result of the work I did to find usable features for FileMaker data. All of these integrations should carefully be considered before implementing into your own custom app. Also, as I mentioned, this is version 1.0. I plan to keep adding resources to this library. Stay tuned to this blog for updates to the file, and if you have any integrations that would be great, let me know!

PS. I am extremely grateful to colleagues here at Soliant: Agnes Riley, Mike Duncan, Wim Decorte, and others for offering specific feedback on this file. My buddy David Jondreau pitched in and offered some good UX experience. And to Jan Jung. She helped get these posts up and kept reminding me to finish the accompanying videos. Her eagle eye ensured that everything looks good. Thanks all.

Get the Demo File

Next Post


These are the other blog posts and videos that go into further detail about this file and how to integrate these into your own custom apps:

The post Introducing the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the second in the series of posts about the Web Viewer Integrations Library.

In the previous post, I introduced to you the Web Viewer Integrations Library, a labor of obsession and passion around the web viewer object. It has been a lot of fun to put together, and I am glad to be able to share it.

In this post, I want to review the data structure of this library, how the file is set up to make these integrations happen. In another post, we will examine the actual file and see its many features, but here we will simply focus on how the file is built.

A Typical Web Page

A typical web page consists of many files hosted on some web server. Figure 1 shows what a typical web page might contain. Web pages usually start off with an index.html page. Inside this page are links to other files that contain the CSS or the Javascript.

According to the W3C (the World Wide Web Consortium), web code should be separated into separate files, with links in the index.html page, for many reasons:

Figure 1. Typical Web Page (click image to enlarge).

  1. Efficiency of code: the web page loads faster when each file can be loaded into memory (and cached) on its own, rather than one file containing everything. Additionally, it is easier to find a specific section in smaller files.
  2. Ease of maintenance: If a developer needs to update the look of the web page, she only needs to open the styles.css page.
  3. Accessibility: Adaptive devices such as screen readers read the text from the HTML file only. It skips over the files with the CSS or the Javascript.
  4. Device Compatibility: An HTML page with no style information inside can be easily adapted depending on the device. If the page is viewed on a phone, the HTML page can access a CSS file with styles designed specifically for that device.
  5. Web Crawlers/Search Engines: Google and other search engines read through the index.html page for the content. Making the content easy to find allows your site to be returned in search results.

The last reason is simply that it is good practice. Standards-aware web developers separate content, style and functionality.


In the code for the C3 Charting Integration shown in Figure 2, there are four files that make this integration happen: the index.html page and three external sources, highlighted in the code. In this case, the CSS and Javascript files (highlighted in yellow) are stored on some computer local to the index.html page. Further, this C3 Charting library needs to access an external source, shown in blue.

Figure 2. C3 Charting Integration (click image to enlarge).

Adapting for FileMaker

In FileMaker, we don’t have quite this flexibility; it is a bit more difficult to work with the multiple files needed for an integration. So we need to come up with a different solution. Let’s take a moment to look at the options.

Inside the Web Viewer Setup

Sometimes developers choose to put all the code for an integration (the index.html, the css file and all the Javascript files) inside the web viewer itself into one long piece of text. This method has two major disadvantages:

  1. A calculation dialog’s limit is 30,000 characters.
  2. It is very difficult to find and modify some particular part of the integration such as the background color of one element or the functionality of one Javascript function.

Inside a Text Object

Other people put all the text into a text object that is hidden on the layout. That does give you somewhat easier access to the code and has no length restriction, but it is still difficult to modify, and you always have to edit the text in Layout mode.

The Chosen Solution

In this library, I’ve chosen to place what would be separate web files into separate fields. We start with an HTML field, which I call the “HTML template”. The CSS code is in a field called “CSS1”, and the Javascript files are in fields named “JS1”, “JS2”, and so on.

In order to incorporate these separate fields into the web page’s code, the content of the HTML fields needs to be combined into one calculation. I use placeholder text, such as “**CSS1**”, in the HTMLTemplate field, which is then replaced by the content of the HTML::CSS1 field.


Thus all the code needed for an integration is found in the fields. This library contains 22 integrations, so there are 22 records. In the effort to standardize these integrations into this file, I’ve created three CSS fields, three Javascript fields, and three data fields. These are filled in with whatever code is needed, and in the proper place in the HTML template field, the placeholder text is set. The integrations presented here nicely fit within this model.

There are some advantages to this method.

Figure 3. Replacing the code.

  1. An integration will work online or offline because all the code is stored locally in fields of a record.
  2. The text placed inside each field will not come close to FileMaker’s size limit for a text field.
  3. Specific to this library, it is very easy to export an integration from here and place it into your own custom app, as we’ll discuss in a later post.

The fact that the code is stored locally is both an advantage and a disadvantage. Locally stored code, as we said above, doesn’t require outside resources to run, but it is static and won’t change automatically. If the jQuery library updates from, say, version 3.1.1 to 3.1.2, the integrations stored locally will not automatically pick up the new version’s features. We would have to go find the new version of the jQuery library and import it into the correct fields manually. However, this is usually little trouble; if an integration is working satisfactorily, there may be no need to get the newest version.

This FileMaker Web Viewer Integrations library is set up in a way that allows for an efficient implementation for any custom app.

In the next post we will take a look at the features of this file as you use it to manipulate and export an integration.

Get the Demo File

Next Post

These are the other blog posts and videos that go into further detail about this file and how to integrate these into your own custom apps:

The post Data Structure of the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the third in the series of posts about the Web Viewer Integrations Library.

The Web Viewer Integrations library is a file meant for anyone to use, both for learning and as a resource for adding deeper, richer functionality to any custom app. But it does take a moment to walk through how it is set up.

The library is set up to give you the ability to find an integration that you need and push it into your own custom app with very little effort. As a test, I was able to push the photo album into a client’s app in about 30 minutes. That time includes setting up the table structure, importing the code and revising the script for the specific app. One-half hour is not bad for a slick-looking photo album.

Let’s walk through the file and let me acquaint you with it.

Let Me Be Your Guide


After picking an integration from the dashboard, you’re presented with the demo or the code view (see Figure 1).

The demo view provides you with a complete look at the end result, and in many cases, it fully interacts with the sample FileMaker data. For example, on the Calendar integration, you can click on an appointment and modify the time, date or category. If there’s a + button on the menu bar, you can create a new record and watch it appear.

The code view shows how the integration is set up. Here is where you’ll spend the most time learning about an integration, manipulating it, and pushing it to your custom app. Let’s take a close look at the code view (see Figure 2).


Figure 1. Dashboard Options.

Figure 2. Code View.

On the left, tabs hold information about this integration.

  • Notes Tab: A description of the integration, developer notes and a guide to the scripting needed to gather data or perform a callback function inside the web viewer.
  • Code Tab: This tab gives you access to the HTML, CSS, and JS text that is needed for the integration.
  • Final HTML Tab: The complete code after all components have been gathered together.
  • Source Tab: An area where the source is listed as well as some helpful resources.

Menu Bar

In the upper right is the menu bar which provides more navigation and more functionality (see Figure 3).

Figure 3. Menu Bar.

  • Preview: Previews the full demo.
  • Print Notes: Prints the entire page of notes about the integration.
  • Export: Exports the code so you can push the integration into your own custom app.
  • Settings: Shows the setup that has been done to make the integration work.
  • Default: Sets the current code as your default and stores it safely.
  • Reset: Resets any part of the code you may have broken back to the stored default.
  • Data: Navigates to the data table behind what is shown in the web viewer.
  • Other: Navigates to another integration.
  • Home: Navigates back to the Dashboard.

We will get into specifics about each one of these features in upcoming posts.

Code Tab

The code tab (see Figure 4) is where you’ll spend most of your time. On the left is the HTML template field, and inside popovers is the code needed for the integration. The labels make it clear which fields contain CSS or JavaScript.

Figure 4. Code Tab.

The Hidden Fields

As you study this file, you’ll notice there are some other fields in the HTML table that are not available for editing. While putting this together, I found that many integrations share a common library: either jQuery (or jQuery.min) or jQuery UI. I decided not to make this code available for editing since it is the core library. However, it is available for export when you push an integration from this file into your own custom app.

A Simple Set Up

That’s it. It is a rather simple file, filled with powerful integrations, and it is easy to use: adjust the code as necessary and push an integration into your own custom app. I hope you get lots of use out of it. Feel free to play around with the library, and if you have any suggestions for how its functionality could be improved, let me know and I’ll try to implement them in the next version.

I hope this gives you a good understanding of this library and how to use it to its fullest capabilities.

Happy playing!

Get the Demo File

Next Post

These are the other blog posts and videos that go into further detail about this file and how to integrate these into your own custom apps:

The post Getting to Know the Web Viewer Integrations Library appeared first on Soliant Consulting.



Soliant Consulting

This is the fourth in the series of posts about the Web Viewer Integrations Library.

Figure 1. Google Auto-Complete Address integration.

The FileMaker Web Viewer Integrations library is a great source of deeper functionality for your custom app. There are 22 web integrations, and one of them might be exactly the solution for a use case you’re trying to solve for yourself or for a client. For example, the “Google Auto-Complete Address” integration uses the power of Google to suggest the exact full address as the user starts to type it, as shown in Figure 1.

That is extremely handy.

Or consider the Data Tables integration, which could be a good substitute for a typical list view in FileMaker (see Figure 2). It allows a user to click on a column header to sort by that column, or to filter the list to see only certain records.

Figure 2. Data tables integration.

Whichever integration turns out to be useful to you, it might need some revision from its current state. This library provides the perfect opportunity to adjust what is here so it works well in your custom app. You might change the colors or the font. You might want to add functionality, or even remove some. Whatever needs to happen, you can do it here without fear of messing up your app.

For those who have less experience with the languages of the web (HTML5, CSS, JS…), you can use this tool to learn more about how the CSS interacts with the rendering. While this library does not give you a complete understanding, it can add to your knowledge. I know I learned a lot while setting up these integrations.

This library allows you to play with and update the settings as necessary. There are a few features in here that allow for this. Let’s take a look at them.

Features of the Library

The Default


Each integration can be manipulated to your satisfaction, but there is always a danger of messing the code up beyond repair while you experiment. I bungled the code many times in my own work with this. Agnes Riley had the great suggestion of keeping a table of the default code: a record hidden in the database that you can fall back on at any time to restore the original code.

A benefit of this is that once you’ve shaped the code perfectly for your use, you can set that as the new default, ready for further manipulation.

Reset

In the event you have to revert to the default code, there’s a reset button. It gives you the option to reset any or all of the text, including the sample data provided. I’ve used this often; it is a lifesaver.


Figure 3. Reset to Default popover.

Figure 4. Reset message.

Expanded View

Figure 5. Expand view.

The CSS, JS and Data fields are found in popovers. You can simply open them up to view and edit the existing code. See Figure 5.

Playing Around

This library is yours to manipulate. If you don’t like the colors I’ve provided, change them. As an example, let’s look at the Data Maps integration.

I want to integrate into a client’s custom app a map of the US that shows different colors depending on the underlying data. The default colors are decent, but they don’t fit the color scheme of my client’s file.

I also want to remove the gray background, and make the sub-header font bigger. I can make those changes easily.

The first thing I’ll do is make sure there’s a default code. So I’ll set the default to what I’ve got on the screen at the moment. That probably will come in handy as we move along.

The next step is to get rid of the gray background and make the sub-head font bigger. Since those are styling details of HTML elements, the place to change them is in the CSS.

Changing the CSS

CSS1 provides the text we need to update.

Here I can simply change the background-color property of the html element. Colors rendered in HTML are written as hex values, so I can use the Hex Color Picker, linked in the Sources tab, to find the color I want. I chose a plain white background, #ffffff. You can choose whatever you wish.

Now I need to update the sub-head. It is too small compared to the title and the map. So I’ll find the tag that surrounds the text. It is in a <header> <p> </p> </header> tag set, so I need to find the styling for that element in CSS1.

I set the font-size property to 20px. That looks good. A rough sketch of both CSS edits follows.
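As a minimal sketch (the exact selectors in the library’s CSS1 field may differ), the two changes look something like this:

    html {
        background-color: #ffffff;  /* replace the gray background with plain white */
    }

    header p {
        font-size: 20px;  /* enlarge the sub-head so it balances the title and the map */
    }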


Figure 6. Edit the CSS.

With those two simple changes, my integration looks better. Since I want to keep it that way, I’m going to press the “Default” button to set this as my new default.

Figure 7. Data Map before.

Figure 8. Data Map after.

I need to make one more change: the colors of the states. Those look pretty bright, and they’re the wrong color for my custom app.

JavaScript (JS2 field)

This integration works by drawing each state in its correct location and giving it a color based on the data presented. In this case, the data text in the Data tab shows a “fillKey” attribute for each state object. The fillKey for Alabama is “HIGH” (note that fillKey values are case-sensitive), so the state of Alabama will get whatever color I’ve assigned to HIGH. That assignment is found in the JavaScript, in the JS2 field.
I can update the colors for each type of state here in the JS2 field (see Figure 9). Using the hex color picker in the Source tab, I can select any four colors I want; a sketch of what that part of the code looks like follows.
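For illustration only, here is a minimal sketch in the style of the Datamaps library, which pairs a fills object with per-state fillKey values; the element id, the hex colors, and the data shown are placeholders of mine, and the real JS2 code in the library may differ:

    var map = new Datamap({
        element: document.getElementById('container'), // 'container' is a placeholder id
        scope: 'usa',
        fills: {
            HIGH: '#2e6da4',        // swap these four hex values for the client palette
            MEDIUM: '#5bc0de',
            LOW: '#d9edf7',
            defaultFill: '#eeeeee'  // used for states with no matching fillKey
        },
        data: {
            AL: { fillKey: 'HIGH' } // Alabama picks up the HIGH color; keys are case-sensitive
        }
    });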

Once I’ve got the colors the way I want them, I’ll be sure to set this current code as the default.

Figure 9. Edit the JavaScript in the JS2 field.

Figure 10. Data map showing changes to the CSS and JavaScript.

Finished

With just a few steps and a small hunt through the code, we are able to make changes and save them as the default. You can change anything in here; if you know a bit of JavaScript, you can update the code to do something different from what is currently set.
Play around with the file; experiment with the code and see what happens in the preview pane.
As always, feel free to reach out if you need advice on manipulating a particular integration. I’d be happy to lend a hand.
Enjoy!

Get the Demo File

Next Post

These are the other blog posts and videos that go into further detail about this file and how to integrate these into your own custom apps:

The post Playing with a Web Viewer Integration appeared first on Soliant Consulting.

