Soliant Consulting


About Soliant Consulting


FileMaker Profile

  • Certification
    FileMaker 15 Certified Developer
  1. In my previous blog post I wrote about handling form data with Formidable, but I didn't mention how to work with file uploads. This is because Formidable by itself does not handle file uploads at all, only string data. Since then, many people have asked me how to handle uploads, if not with the library itself. My answer is quite simple: use the tools your PSR-7 implementation already gives you.

Meet the UploadedFileInterface

Any library implementing PSR-7 has a getUploadedFiles() method on its server request implementation. This method returns an array of objects implementing Psr\Http\Message\UploadedFileInterface. There are many ways that files can be transmitted to the server, so let's roll with the simplest one right now, where you have a form with a single file input and nothing else. In that case your middleware may look something like this:

<?php

use Interop\Http\ServerMiddleware\DelegateInterface;
use Interop\Http\ServerMiddleware\MiddlewareInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Message\UploadedFileInterface;

final class UploadMiddleware implements MiddlewareInterface
{
    public function process(ServerRequestInterface $request, DelegateInterface $delegate)
    {
        $uploadedFiles = $request->getUploadedFiles();

        if (!array_key_exists('file', $uploadedFiles)) {
            // Return an error response
        }

        /* @var $file UploadedFileInterface */
        $file = $uploadedFiles['file'];

        if (UPLOAD_ERR_OK !== $file->getError()) {
            // Return an error response
        }

        $file->moveTo('/storage/location');

        // At this point you may want to check if the uploaded file matches the
        // criteria your domain dictates. If you want to check for valid images,
        // you may try to load it with Imagick, or use finfo to validate the
        // mime type.

        // Return a successful response
    }
}

This is a very basic example, but it illustrates how to handle any kind of file upload.
Please note that Psr\Http\Message\UploadedFileInterface doesn't give you access to the temporary file name, so you actually have to move the file to another location first before doing any checks on it. This is to ensure that the file was actually uploaded and is not coming from any malicious source.

Integration with Formidable

The previous example just gave you an idea of how to handle a PSR-7 middleware file upload on its own, without any further data transmitted with the file. If you want to first validate your POST data, your middleware could look similar to this:

<?php

use DASPRiD\Formidable\FormError\FormError;
use DASPRiD\Formidable\FormInterface;
use Interop\Http\ServerMiddleware\DelegateInterface;
use Interop\Http\ServerMiddleware\MiddlewareInterface;
use Psr\Http\Message\ServerRequestInterface;

final class UploadMiddleware implements MiddlewareInterface
{
    /**
     * @var FormInterface
     */
    private $form;

    public function __construct(FormInterface $form)
    {
        $this->form = $form;
    }

    public function process(ServerRequestInterface $request, DelegateInterface $delegate)
    {
        $form = $this->form;

        if ('POST' === $request->getMethod()) {
            $form = $form->bindFromRequest($request);

            if (!$form->hasErrors()) {
                $fileUploadSuccess = $this->processFileUpload($request);

                if ($fileUploadSuccess) {
                    // Persist $form->getValue();
                } else {
                    // Forms are immutable, so withError() returns a new instance
                    $form = $form->withError(new FormError('file', 'Upload error'));
                }
            }
        }

        // Render HTML with $form
    }

    private function processFileUpload(ServerRequestInterface $request) : bool
    {
        // Do the same checks as in the previous example
    }
}

As you can see, you simply stack the file upload processing onto the normal form handling; they don't have to interact at all, except for putting an error on the form for the file element.

The post PSR-7 Middleware File Upload with Formidable appeared first on Soliant Consulting.
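The control flow of the checks described above is language-agnostic, so here is a minimal sketch of the same sequence in JavaScript. The function name and the returned shape are mine, purely for illustration; the real implementation is the PHP middleware shown above.

```javascript
// Hypothetical sketch of the upload-checking flow: field present,
// transfer succeeded, then (and only then) move and inspect the file.
// UPLOAD_ERR_OK mirrors PHP's constant: 0 means "no error".
const UPLOAD_ERR_OK = 0;

function checkUpload(uploadedFiles) {
  // 1. The expected form field must be present.
  if (!('file' in uploadedFiles)) {
    return { status: 400, reason: 'missing file field' };
  }

  const file = uploadedFiles.file;

  // 2. The transfer itself must have succeeded.
  if (file.error !== UPLOAD_ERR_OK) {
    return { status: 400, reason: 'upload error' };
  }

  // 3. Only now move the file to storage and run any domain checks
  //    (image validity, mime type, and so on).
  return { status: 200, reason: 'ok' };
}
```

The important point is the ordering: nothing about the file's content is inspected until the basic transport-level checks have passed.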
  2. If you use Pardot to handle your marketing campaigns and have tried to integrate Google AdWords with your Salesforce org, you have probably noticed that Google does not provide any step-by-step solution for integrating all three together to track your clickable ads. It took some time, but after some coding changes and a rather long phone call with Google, there is now a solution you can follow. If you are using a native Salesforce web-to-lead form, you can find standard support from Google. If you use Pardot for your landing pages, continue reading below for help integrating Google AdWords and Salesforce through Pardot.

Setting Up Your Files

Create new GCLID fields

To start off, let's create new GCLID fields on both the opportunity and lead objects. See Figures 1 and 2 below.

Figure 1. Add the GCLID field to the Opportunity (click image to enlarge).
Figure 2. Add the GCLID field to the Lead (click image to enlarge).

After the two fields have been created on the opportunity and lead objects, we must map the fields, as shown in Figures 3 and 4.

Figure 3. Begin to map the lead fields.
Figure 4. Mapping the Lead and GCLID fields.

Add the script to your landing pages

Now that the configurations have been completed, it's time to touch some code on your website. If you don't have access to this, contact your webmaster to help with this step. A cookie value needs to be stored on your website to save the GCLID based on the ad that is clicked. The following script should be added before the closing tags on all of your landing pages on the website.
<script type="text/javascript">
function setCookie(name, value, days) {
    var date = new Date();
    date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
    var expires = "; expires=" + date.toGMTString();
    document.cookie = name + "=" + value + expires + ";domain=" + location.hostname.replace("www.", '');
}

function getParam(p) {
    var match = RegExp('[?&]' + p + '=([^&]*)').exec(window.location.search);
    return match && decodeURIComponent(match[1].replace(/\+/g, ' '));
}

var gclid = getParam('gclid');
if (gclid) {
    var gclsrc = getParam('gclsrc');
    if (!gclsrc || gclsrc.indexOf('aw') !== -1) {
        setCookie('gclid', gclid, 90);
    }
}
</script>

Create a hidden field

Once this step is completed, we will focus on the Pardot portion of the integration. To start off, on your landing pages, create a hidden field labeled GCLID.

Figure 5 (click image to enlarge)

Add code snippet to your form

Next, on the same form, click "Look and Feel" on the menu bar towards the top of the page. Click the "Below Form" tab, then click the HTML button all the way to the right (next to the omega symbol).

Figure 6 (click image to enlarge)

<script>
window.onload = function getGclid() {
    document.getElementById("xxxx").value = (name = new RegExp('(?:^|;\\s*)gclid=([^;]*)').exec(document.cookie)) ? name[1] : "";
}
</script>

After this piece of code is inserted into the Pardot form, you are ready to test the integration between Salesforce and Google AdWords through Pardot. In the URL of your contact us page, add "?gclid=blogTest" (or any testing word) at the end as shown below.

Find the information submitted

Once you submit the lead information, go to Leads in Salesforce and find the information that you submitted (see Figure 7).
Figure 7 (click image to enlarge)

Keyword added to the GCLID field

In the GCLID field, you should see the keyword that you entered at the end of the URL in the step above, in my case "blogTest", as shown in Figure 8.

Figure 8 (click image to enlarge)

When the link is successful (meaning you see the keyword "blogTest" that you entered into the URL on your lead in Salesforce), you have integrated Google AdWords with Salesforce through Pardot! The final step is to link your Salesforce account to your Google AdWords account.

Link Your Salesforce and Google AdWords Accounts

Sign in to your Google AdWords account; on the right-hand side next to your customer ID, you will see a cog. When you click the cog, there should be a link called "Linked accounts."

Figure 9

Choose accounts to link to Google AdWords

After you have clicked the Linked accounts link, you should be on the following page. Here you can choose which accounts to link to your Google AdWords account. In our case, click "View details."

Figure 10

Log into your Salesforce organization

Finally, click the "+ Account" button on the page and you will be redirected to the Salesforce authentication page to log in to your Salesforce organization.

Figure 11 (click image to enlarge)

Once your Salesforce organization is linked, you will be prompted to set up conversions that are relevant to your Google ads. After you set up these conversions, you are ready to completely track your clickable ads with AdWords and Salesforce through Pardot.

The post Integrate Google AdWords with Your Salesforce Org Through Pardot appeared first on Soliant Consulting.
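The two snippets above reduce to two small parsing steps: pulling "gclid" out of the query string, and reading it back out of the cookie. They can be exercised outside a browser with plain strings (the function names here are mine; the regular expressions are the ones used in the snippets):

```javascript
// Extract a query parameter from a URL search string, as getParam() does.
function getParam(search, p) {
  var match = new RegExp('[?&]' + p + '=([^&]*)').exec(search);
  return match && decodeURIComponent(match[1].replace(/\+/g, ' '));
}

// Read the gclid back out of a cookie string, as the Pardot form snippet does.
function readGclidCookie(cookie) {
  var name = new RegExp('(?:^|;\\s*)gclid=([^;]*)').exec(cookie);
  return name ? name[1] : '';
}
```

For example, a visit to "?gclid=blogTest" stores "gclid=blogTest" in the cookie, and the form snippet later copies "blogTest" into the hidden GCLID field.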
  3. Introduction

For many years I used Zend_Form from Zend Framework 1, Zend\Form from Zend Framework 2, and a few other form libraries. With the advent of Zend Framework 3 and more type hinting options in PHP 7, I started to wonder if there is a nicer way to handle forms. I got a little sick of libraries trying to dictate the resulting HTML, or just making it really hard to create custom HTML. So I did what I always do in this position: I looked around different frameworks, even from other languages, to see how others solved the problem. After a few days of research, I ended up liking the approach of the Play Framework a lot, specifically their Scala implementation. The first thing I did was of course learn to read Scala, which took me a little while because the syntax is quite different from what I was used to. After that I was able to understand the structure and how things worked, so I could start writing a PHP library based on it, named Formidable.

How it works

Formidable works similarly to the form libraries you are already familiar with, yet it is slightly different. There is no mechanism in place to render any HTML; it comes with a few helpers to render generic input elements, but those are mostly demonstrations on which to build your own renderers. Also, every object within Formidable is considered immutable, so when passing around a form object, you can be sure that it's just for you and nothing else has modified it. A form object always has a mapping assigned, which takes care of translating values between the input (usually POST) and a value object. There is no magic going on to hydrate entities directly; everything goes through those value objects. The mappings are also responsible for validating your input, but offer no filter mechanism.
Before I started writing this library, I analyzed all of my prior projects and discussed them with other developers, and the only real pre-validation filtering we ever did was trimming the input, which became a default in Formidable. In the rare use cases we could think of where special filters really were called for, we decided against a built-in filter mechanism. I won't go into detail about how you build forms with Formidable, as that topic is explained in detail in the Formidable documentation. Instead, I'm going to tell you how to use the resulting forms properly.

Using Formidable forms

Let's say we have a form for blog entries. We'll have a value object taking the title and the content from the form, which is also responsible for actually creating blog entries from itself and updating existing ones:

Example value object

final class BlogEntryData
{
    private $title;

    private $content;

    public function __construct(string $title, string $content)
    {
        $this->title = $title;
        $this->content = $content;
    }

    public static function fromBlogEntry(BlogEntry $blogEntry) : self
    {
        return new self(
            $blogEntry->getTitle(),
            $blogEntry->getContent()
        );
    }

    public function createBlogEntry(int $creatorId) : BlogEntry
    {
        return new BlogEntry($creatorId, $this->title, $this->content);
    }

    public function updateBlogEntry(BlogEntry $blogEntry) : void
    {
        $blogEntry->update($this->title, $this->content);
    }
}

As you can see, our value object has all the logic nicely encapsulated to work with the actual blog entry.
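The value-object pattern above is not PHP-specific. As a rough JavaScript equivalent (class and property names mirror the PHP example; this is only a sketch of the pattern, not part of Formidable):

```javascript
// Minimal stand-in for the domain entity in the example above.
class BlogEntry {
  constructor(creatorId, title, content) {
    this.creatorId = creatorId;
    this.title = title;
    this.content = content;
  }
}

// The value object only carries validated form data, and knows how to
// create a real BlogEntry from itself or rebuild itself from one.
class BlogEntryData {
  constructor(title, content) {
    this.title = title;
    this.content = content;
  }

  static fromBlogEntry(blogEntry) {
    return new BlogEntryData(blogEntry.title, blogEntry.content);
  }

  createBlogEntry(creatorId) {
    return new BlogEntry(creatorId, this.title, this.content);
  }
}
```

The entity never sees raw form input; everything passes through the value object first, which is the whole point of the mapping design.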
Now let's see what our middleware for creating blog entries would look like:

Example create middleware

use DASPRiD\Formidable\Form;
use Psr\Http\Message\ServerRequestInterface;

final class CreateBlogEntry
{
    private $form;

    public function __construct(Form $form)
    {
        $this->form = $form;
    }

    public function __invoke(ServerRequestInterface $request)
    {
        if ('POST' === $request->getMethod()) {
            $form = $this->form->bindFromRequest($request);

            if (!$form->hasErrors()) {
                $blogEntryData = $form->getValue();
                persistSomewhere($blogEntryData->createBlogEntry(getUserId()));
            }
        } else {
            $form = $this->form;
        }

        return renderViewWithForm($form);
    }
}

The update middleware requires a bit more work, since we have to work with an already existing blog entry, but it mostly looks the same as our create middleware:

Example update middleware

use DASPRiD\Formidable\Form;
use Psr\Http\Message\ServerRequestInterface;

final class UpdateBlogEntry
{
    private $form;

    public function __construct(Form $form)
    {
        $this->form = $form;
    }

    public function __invoke(ServerRequestInterface $request)
    {
        $blogEntry = getBlogEntryToEdit();

        if ('POST' === $request->getMethod()) {
            $form = $this->form->bindFromRequest($request);

            if (!$form->hasErrors()) {
                $blogEntryData = $form->getValue();
                $blogEntryData->updateBlogEntry($blogEntry);
                persistSomewhere($blogEntry);
            }
        } else {
            $form = $this->form->fill(BlogEntryData::fromBlogEntry($blogEntry));
        }

        return renderViewWithForm($form);
    }
}

Rendering

As I wrote earlier, Formidable is in no way responsible for rendering your forms. What it does give you, though, is all the field values and error messages you need to render your form. By itself it doesn't tell you which fields exist on the form, so your view does need to know about that.
Again, the documentation gives you very good insight into how you can render your forms with helpers, but here is a completely manual approach, to illustrate how Formidable works at the fundamental level:

Example form HTML

<form method="POST">
    <?php if ($form->hasGlobalErrors()): ?>
        <ul class="errors">
            <?php foreach ($form->getGlobalErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>

    <?php $field = $form->getField('title'); ?>
    <label for="title">Title:</label>
    <input type="text" name="title" id="title" value="<?php echo htmlspecialchars($field->getValue()); ?>">
    <?php if ($field->hasErrors()): ?>
        <ul class="errors">
            <?php foreach ($field->getErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>

    <?php $field = $form->getField('content'); ?>
    <label for="content">Content:</label>
    <textarea name="content" id="content"><?php echo htmlspecialchars($field->getValue()); ?></textarea>
    <?php if ($field->hasErrors()): ?>
        <ul class="errors">
            <?php foreach ($field->getErrors() as $error): ?>
                <li><?php echo htmlspecialchars($error->getMessage()); ?></li>
            <?php endforeach; ?>
        </ul>
    <?php endif; ?>

    <input type="submit">
</form>

As I said, this is a very basic approach with a lot of repeated code. Of course, you are advised to write your own helpers to render the HTML as your project calls for it. What I personally end up doing most of the time is writing a few helpers which wrap around the helpers supplied by Formidable and have them wrap labels and other HTML markup around the created inputs, selects, and textareas. There is a big advantage to decoupling presentation from the form library, which you may already appreciate if you've wrestled with other popular libraries that bake in assumptions about how to mark up the output.
Final words

I hope that this blog post gave you a few insights into Formidable and made you hungry to try it out yourself. It currently supports PHP 7.0 and up, and I would like to get feedback when you see anything missing or something which can be improved. As written on the GitHub repository, there is still a small part missing to make it fully typehinted: generics in PHP. I created an RFC together with Rasmus Schultz a while back, but we are currently missing an implementer, which is why the RFC is somewhat on hold. If you know something about PHP internals, feel free to hop in to make generics a reality for us! I really have to thank Soliant at this point, who sponsored the development time to create Formidable!

The post Formidable – A Different Approach to Forms appeared first on Soliant Consulting.
  4. Salesforce Lightning looks great and works beautifully. To enhance it, I've added a new Multiselect component. Enjoy!

Salesforce Lightning Multiselect

This is another component blog, just a small one this time, showing you how to create and use my new Multiselect component. For some of my other components, please look here: Lookup – Embed a Lightning Lookup in a Visualforce Page; Datepicker – Lightning Datepicker. What I'm going to show is how to take the static HTML defined on the Salesforce Lightning Design System (SLDS) web page and turn it into an actual, working component.

Method

First, define the event that you'll be using. This event is used to tell the parent component that the selected value(s) have changed. It is called the "SelectChange" event.

<aura:event type="COMPONENT" description="Despatched when a select has changed value">
    <aura:attribute name="values" type="String[]" description="Selected values" access="global" />
</aura:event>

Next, we add the markup for the actual component itself. It is composed of the button that triggers the dropdown and the dropdown itself. The button contains an icon triangle and some text indicating what has been selected. The dropdown is a list driven by aura:iteration. All selection/deselection logic is driven by the controller and helper classes.

<aura:component>
    <!-- public attributes -->
    <aura:attribute name="options" type="SelectItem[]" />
    <aura:attribute name="selectedItems" type="String[]" />
    <aura:attribute name="width" type="String" default="240px;" />
    <aura:attribute name="dropdownLength" type="Integer" default="5" />
    <aura:attribute name="dropdownOver" type="Boolean" default="false" />

    <!-- private attributes -->
    <aura:attribute name="options_" type="SelectItem[]" />
    <aura:attribute name="infoText" type="String" default="Select an option..." />

    <!-- let the framework know that we can dispatch this event -->
    <aura:registerEvent name="selectChange" type="c:SelectChange" />

    <aura:method name="reInit" action="{!c.init}" description="Allows the lookup to be reinitialized">
    </aura:method>

    <div aura:id="main-div" class="slds-picklist slds-dropdown-trigger slds-dropdown-trigger--click">
        <!-- the disclosure triangle button -->
        <button class="slds-button slds-button--neutral slds-picklist__label"
                style="{!'width:' + v.width}"
                aria-haspopup="true"
                onclick="{!c.handleClick}"
                onmouseleave="{!c.handleMouseOutButton}">
            <span class="slds-truncate" title="{!v.infoText}">{!v.infoText}</span>
            <lightning:icon iconName="utility:down" size="small" class="slds-icon" />
        </button>

        <!-- the multiselect list -->
        <div class="slds-dropdown slds-dropdown--left"
             onmouseenter="{!c.handleMouseEnter}"
             onmouseleave="{!c.handleMouseLeave}">
            <ul class="{!'slds-dropdown__list slds-dropdown--length-' + v.dropdownLength}" role="menu">
                <aura:iteration items="{!v.options_}" var="option">
                    <li class="{!'slds-dropdown__item ' + (option.selected ? 'slds-is-selected' : '')}"
                        role="presentation"
                        onclick="{!c.handleSelection}"
                        data-value="{!option.value}"
                        data-selected="{!option.selected}">
                        <a href="javascript:void(0);" role="menuitemcheckbox" aria-checked="true" tabindex="0">
                            <span class="slds-truncate">
                                <lightning:icon iconName="utility:check" size="x-small"
                                    class="slds-icon slds-icon--selected slds-icon--x-small slds-icon-text-default slds-m-right--x-small" />{!option.value}
                            </span>
                        </a>
                    </li>
                </aura:iteration>
            </ul>
        </div>
    </div>
</aura:component>

As you can see, this is mostly just basic HTML and CSS using the Salesforce Lightning Design System. To make it work, we implement a JavaScript controller and helper. These JavaScript objects load and sort "items" into the select list:

init: function(component, event, helper) {
    // note, we get options and set options_
    // options_ is the private version and we use this from now on.
    // this is to allow us to sort the options array before rendering
    var options = component.get("v.options");

    options.sort(function compare(a, b) {
        if (a.value == 'All') {
            return -1;
        } else if (a.value < b.value) {
            return -1;
        }
        if (a.value > b.value) {
            return 1;
        }
        return 0;
    });

    component.set("v.options_", options);
    var values = helper.getSelectedValues(component);
    helper.setInfoText(component, values);
},

As you can see, I'm not touching any HTML. I'm relying on Lightning's binding framework to do the actual rendering: by adding to the options list, Lightning will apply that to the list element defined in the component and render the list (hidden initially). Also note that there is an 'All' value that the system expects. Change this to whatever you like, or even remove it, but remember to change the text here in the controller. :)

Another interesting area to explain is how selecting/deselecting is done:

handleSelection: function(component, event, helper) {
    var item = event.currentTarget;

    if (item && item.dataset) {
        var value = item.dataset.value;
        var selected = item.dataset.selected;
        var options = component.get("v.options_");

        // shift key ADDS to the list (unless clicking on a previously selected item)
        // also, shift key does not close the dropdown (uses mouse out to do that)
        if (event.shiftKey) {
            options.forEach(function(element) {
                if (element.value == value) {
                    element.selected = selected == "true" ? false : true;
                }
            });
        } else {
            options.forEach(function(element) {
                if (element.value == value) {
                    element.selected = selected == "true" ? false : true;
                } else {
                    element.selected = false;
                }
            });

            var mainDiv = component.find('main-div');
            $A.util.removeClass(mainDiv, 'slds-is-open');
        }

        component.set("v.options_", options);
        var values = helper.getSelectedValues(component);
        var labels = helper.getSelectedLabels(component);
        helper.setInfoText(component, values);
        helper.despatchSelectChangeEvent(component, labels);
    }
},

I am using a custom object, 'SelectItem', because I'm not able to create a 'selected' attribute on Salesforce's built-in version. In the code above, I'm looking at this value and either adding the item to the list, replacing the list with this one item, or removing it. In this case I'm using the shift key, but this can be customized to any key. Finally, I update the text with the new value and, if multiple values are selected, the count of values. One tricky area was handling hiding and showing of the select list. I use the technique below:

handleClick: function(component, event, helper) {
    var mainDiv = component.find('main-div');
    $A.util.addClass(mainDiv, 'slds-is-open');
},

handleMouseLeave: function(component, event, helper) {
    component.set("v.dropdownOver", false);
    var mainDiv = component.find('main-div');
    $A.util.removeClass(mainDiv, 'slds-is-open');
},

handleMouseEnter: function(component, event, helper) {
    component.set("v.dropdownOver", true);
},

handleMouseOutButton: function(component, event, helper) {
    window.setTimeout(
        $A.getCallback(function() {
            if (component.isValid()) {
                // if dropdown over, user has hovered over the dropdown, so don't close.
                if (component.get("v.dropdownOver")) {
                    return;
                }
                var mainDiv = component.find('main-div');
                $A.util.removeClass(mainDiv, 'slds-is-open');
            }
        }), 200
    );
}

When the button is clicked, the list is shown. When the mouse leaves the button but does not enter the dropdown, the list closes. When the mouse leaves the button and enters the dropdown, the close is cancelled. When the mouse leaves the list, it hides. Seems simple, but getting it working nicely can be tough.
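Stripped of the Lightning plumbing, the selection rules (a plain click selects only the clicked item; a shift-click toggles it while keeping the rest) reduce to a small pure function. This is only a sketch of the rules described above, not the component's code:

```javascript
// options: array of { value, selected }; value: the clicked item's value;
// shiftKey: whether shift was held. Returns a new options array.
function applySelection(options, value, shiftKey) {
  return options.map(function (option) {
    if (option.value === value) {
      // The clicked item toggles in both modes.
      return { value: option.value, selected: !option.selected };
    }
    // With shift held, other selections are kept; otherwise they are cleared.
    return { value: option.value, selected: shiftKey ? option.selected : false };
  });
}
```

Keeping this logic free of DOM access is what makes the controller easy to reason about: the markup only reflects whatever the options array says.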
To use it, simply add it as part of a form (or on its own if you'd like):

<div class="slds-form-element">
    <label class="slds-form-element__label" for="my-multi-select">Multi Select!!</label>
    <div class="slds-form-element__control">
        <c:MultiSelect aura:id="my-multi-select"
            options="{!v.myOptions}"
            selectChange="{!c.handleSelectChangeEvent}"
            selectedItems="{!v.mySelectedItems}" />
    </div>
</div>

Here's what it looks like:

The MultiSelect item in action

That's all for now. Download the files on GitHub. Enjoy!

The post Create a Custom Salesforce Lightning Multiselect Component appeared first on Soliant Consulting.
  5. This blog post examines two of FileMaker's features and how they work together. The first is the Web Viewer, a special layout object that can display web content right in your FileMaker app. The second is WebDirect, FileMaker Server's ability to automatically display your custom FileMaker app in a web browser.

Web Viewers and WebDirect

We have received several inquiries about Web Viewers not rendering in WebDirect. As these techniques become more popular, more developers may run into this issue. When first debugging it, we assumed it was a limitation of WebDirect. However, after discussing it with co-workers Jeremy Brown and Ross Johnson, a couple of workarounds were discovered. The solution discussed here is the simplest and most elegant.

First, the Web Viewer, when shown on a FileMaker Pro layout, runs as its own independent web page, just as if you opened a new tab in your web browser and loaded a URL. In WebDirect, however, content needs to be loaded inside the web page as the content of an "iframe" element. Iframes are a special type of HTML element meant to easily specify and display other HTML content within that iframe object. The remote content of an iframe is referenced as an attribute, at a very basic level, like so:

<iframe src="your_url_here"></iframe>

Seems pretty straightforward, right? However, arbitrarily long URLs or odd characters may cause the iframe to break and not load.

JavaScript Graphs

JavaScript can be a great option to expand the functionality to include just about any type of graph you can imagine and populate it with your FileMaker data. If you have used JavaScript, such as Jeremy Brown's useful Web Viewer Integrations Library, to display graphs in the Web Viewer via data URLs, you may run into issues when displaying in WebDirect.
Data URIs

You are probably familiar with URLs that start with "http" or "https", but there are many other types of uniform resource identifiers (URIs). A data URI, instead of including a location, embeds the data to be displayed directly in the document. We commonly use them in FileMaker to construct HTML to display in a web viewer and avoid network latency and dependencies, including JavaScript. For example, you can set a Web Viewer with HTML like this:

"data:text/html,<html>…</html>"

The issue with displaying arbitrarily large or complex data URLs in WebDirect is that the "src" attribute has the potential to break with some JavaScript included as part of the data URI. There is likely an unsupported character or combination somewhere in the included libraries that makes it incompatible with loading as a data URI directly.

What to Do?

Part of the syntax of a data URI allows for specifying the content as being encoded as Base64:

data:[<mediatype>][;base64],<data>

Typically, you would use this to represent non-textual data, such as images or other binary data. In this case, it can be applied when the media type is "text/html" as well. This provides a safe way of transferring the HTML data so it will be decoded by the web browser, where it is rendered at runtime. Admittedly, this introduces a little more processing that has to happen somewhere, and it can cause a slight delay when rendering in FileMaker Pro vs. not encoding as Base64. However, we can test whether a user is in WebDirect or not, and direct the output of the Web Viewer appropriately:

Case (
    PatternCount ( Get ( ApplicationVersion ) ; "Web" ) ;
        "data:text/html;base64," & Base64Encode ( HTML::HTML_Calc_Here ) ;
    "data:text/html," & HTML::HTML_Calc_Here
)

Note the addition of ";base64" if the application is coming from a "Web" client. With this test, we optimize for both clients and ensure that our content functions everywhere.
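The branching in the Case() calculation above can be mimicked in JavaScript to see exactly what each client receives (Base64 here via Node's Buffer; the "Web" substring check mirrors the Get ( ApplicationVersion ) test, and the function name is mine):

```javascript
// Build the web viewer URI the way the Case() calculation does:
// base64-encode the HTML for WebDirect ("Web" clients), pass it raw otherwise.
function webViewerUri(applicationVersion, html) {
  if (applicationVersion.indexOf('Web') !== -1) {
    // WebDirect: encode so odd characters can't break the iframe's src.
    const encoded = Buffer.from(html, 'utf8').toString('base64');
    return 'data:text/html;base64,' + encoded;
  }
  // FileMaker Pro: skip the encoding step and its slight rendering delay.
  return 'data:text/html,' + html;
}
```

Both branches yield a valid data URI; only the WebDirect branch pays the encode/decode cost.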
Here is the result in FileMaker Pro:

Results in FileMaker Pro (click image to enlarge).

The same layout viewed in WebDirect:

Layout viewed in WebDirect (click image to enlarge).

You really have to look twice to see which screenshot belongs to which application!

Other Considerations

There are other factors that may cause issues as well. So far, the assumption has been that all JavaScript and assets are loaded inline, without external references. You may still choose to have external references; just be aware that loading them in an iframe element may behave differently than how they are handled in a FileMaker Pro client.

It is a best practice to have an SSL certificate installed on your production FileMaker Server, and WebDirect will automatically use that certificate as well. That means that, with SSL enabled, WebDirect will redirect clients from HTTP requests to HTTPS. The consequence is that all your content must also be secure, as far as your web browser is concerned. An HTTP site can reference HTTPS assets, but not the other way around. If you have SSL enabled, make sure that all external references, such as linked JavaScript libraries, are referenced with HTTPS as well. For development servers using a self-signed certificate, pretty much nothing will load correctly, because the web browser will not load anything served with a certificate it cannot verify. The main site will load, but not content included from other sites in the page.

Then there are occasions where you may need to write your own web page to display in a Web Viewer, hosted from another web server entirely. In that case, you may need to enable CORS headers for it to work. Again, in FileMaker Pro clients it works fine, but in WebDirect it loads as an iframe, and it becomes a security concern in web browsers to prevent cross-site scripting.
How to Support CORS in PHP

If you host your PHP page from the same FileMaker Server, making sure to match HTTP vs. HTTPS, then there is no conflict about JavaScript loading from a different source. If, for some reason, you want to have the file load from a different location, you will want to add CORS support in your PHP file as well. The final PHP file will look something like this:

<?php

// Enable CORS: allow from any origin
if (isset($_SERVER['HTTP_ORIGIN'])) {
    header("Access-Control-Allow-Origin: {$_SERVER['HTTP_ORIGIN']}");
    header('Access-Control-Allow-Credentials: true');
    header('Access-Control-Max-Age: 86400'); // cache for 1 day
}

// Access-Control headers are received during OPTIONS requests
if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS') {
    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_METHOD'])) {
        header("Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS");
    }

    if (isset($_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS'])) {
        header("Access-Control-Allow-Headers: {$_SERVER['HTTP_ACCESS_CONTROL_REQUEST_HEADERS']}");
    }
}

One other consideration, which I found when using one FileMaker Server to host a file for different WebDirect-served solutions, is that there is an added HTTP header configured in the default site on FileMaker Server's web server. This is done for added security, to protect WebDirect against cross-site scripting attacks, so you may or may not want to adjust this setting for your needs. If on a Windows server, you will find this setting in the IIS configuration for HTTP headers: it adds an "X-Frame-Options" header set to require the same origin. If you need to serve this PHP page from a different server, you will need to remove this default header. Then, in addition to the CORS support, this script will work from different servers. This may be seen as lowering the security on that machine, and should probably be avoided by hosting your scripts on a different server, if needed.
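The same CORS logic as the PHP snippet can be expressed as a small Node-style helper that computes the response headers for a given request (the function name and request shape are mine, for illustration only):

```javascript
// Mirror of the PHP block above: echo the Origin back, allow credentials,
// and answer the extra preflight headers on OPTIONS requests.
// req: { method, headers } with lowercase header names, as in Node.
function corsHeaders(req) {
  const headers = {};

  if (req.headers['origin']) {
    headers['Access-Control-Allow-Origin'] = req.headers['origin'];
    headers['Access-Control-Allow-Credentials'] = 'true';
    headers['Access-Control-Max-Age'] = '86400'; // cache preflight for 1 day
  }

  if (req.method === 'OPTIONS') {
    if (req.headers['access-control-request-method']) {
      headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS';
    }
    if (req.headers['access-control-request-headers']) {
      headers['Access-Control-Allow-Headers'] = req.headers['access-control-request-headers'];
    }
  }

  return headers;
}
```

Echoing the Origin back (rather than sending "*") is what keeps this compatible with Access-Control-Allow-Credentials, which forbids the wildcard.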
References Introducing the Web Viewer Integrations Library – Soliant Blog Data URIs – Mozilla FileMaker WebDirect – FileMaker The post Display Complex Web Viewers in WebDirect appeared first on Soliant Consulting.
6. I’m in the process of studying for my Salesforce certification, and it’s not easy! If you’re ahead of me and already have your certification, you’ve proven that you know all about the newest Salesforce release and you’re ready to send your company’s Salesforce ROI through the roof. That’s a major accomplishment, but make sure you hold onto it! So how do you hold onto your Salesforce certification? The good news is that you don’t have to take the full exam every time you need to prove you’re current; much in the same way you don’t have to take a driving test each time your driver’s license expires, once you establish your Salesforce credentials for a given certification, there are far fewer hoops to jump through to keep it. All you have to do is take a short release exam to maintain that credential for each new release. It’s a pretty rational approach that lets us demonstrate we still know our stuff without adding much strain to our already packed schedules.

Salesforce Certification Exam Cycle

As a Salesforce Developer or Admin, you need to take the exam each release cycle, which happens a bit more often than driver’s licenses expire — a new release comes out about three times per year, every four months or so. Some of the release exams are:

- Salesforce Certified Administrator Release Exam
- Salesforce Certified Developer Release Exam
- Salesforce Certified Platform App Builder Release Exam
- Salesforce Certified Platform Developer I Release Exam
- Salesforce Certified Pardot Consultant Release Exam

Administrator Release Exam

If you want to keep your Administrator, Advanced Administrator, Service Cloud Consultant, Sales Cloud Consultant, Community Cloud Consultant, or Field Service Lightning Consultant certification, you’ll need to take the Administrator Release Exam.
Developer Release Exam

If you want to maintain your certification as a Developer or Advanced Developer, you need to take the Developer Release Exam.

Platform App Builder Release Exam

To hold the Platform App Builder, Application Architect, or System Architect certification, you’ll have to take the Platform App Builder Release Exam.

Platform Developer I Release Exam

To be a certified Platform Developer I, Platform Developer II, or Application Architect, you’ll need to take the Platform Developer I Release Exam.

Marketing Cloud Email Specialist Release Exam

If you are a Certified Marketing Cloud Email Specialist or Consultant, you’ll need to take the Marketing Cloud Email Specialist Release Exam.

Taking the Exam

You must answer about 15 questions in 30 minutes to complete a release exam. It’s an unproctored exam, so you’re allowed to reference whatever literature or online resources you’d like, so long as you complete the exam within the time limit. It’s a good idea to brush up on the release notes and watch the videos on Salesforce’s YouTube channel immediately before taking it so that the information is fresh in your mind. In addition to taking regular exams, you’ll also need to pay an annual $100 maintenance fee to keep your certification. Be sure to take the exams by the established deadlines, which typically fall 8 or 9 months after the release. For example: The Summer ’16 Release Exam is due March 24, 2017. The Winter ’17 Release Exam is due July 14, 2017. Important: If you miss the deadline or fail the exam three times, your credentials will expire and you will have to take the full exam again, so make sure you know the deadlines and you’re prepared before you take the exam.

Useful Links

Salesforce Certification – Exam Schedules – Salesforce University
Maintaining Your Salesforce Certification – Salesforce Help
Salesforce YouTube Channel – YouTube

The post How to Keep Your Salesforce Certification appeared first on Soliant Consulting.
7. Every year Soliant has an offsite where all the offices meet in one place; we wrap up our offsite with a volunteer activity. Last year we worked at the Elache Nature Center in Gainesville, GA. This year we returned to Georgia and had about 24 people who stayed an extra day and volunteered at a local non-profit shelter in Buford, GA. Shelters are always in need of food donations, so everyone participated in our volunteer effort by donating pantry items to the Home of Hope-Gwinnett Children’s Shelter. Volunteering after our offsite has become a tradition that allows us to give back to local communities and strengthen our bonds by working together to help others. We started the day with an excellent breakfast buffet and then carpooled to Buford, GA. The ride to Buford was quiet in the beginning; I think everyone was tired from all the activities we had at the offsite, which was quite busy. We had many professional development sessions throughout the day and lots of entertaining activities at night, including a talent show and an awards dinner.

The Next Step Towards Independence

As soon as we arrived at the Home of Hope, Bridgette, the Food Services Manager, told us all about their non-profit. It is a residential care facility that provides a temporary home and support for homeless children from 0–17 years old and their mothers. Home of Hope also gives support to girls aging out of the foster-care system. The shelter provides housing, life coaching, and educational support to help moms and young ladies get back on their feet. Their goal is “not simply to be a place of refuge; we are the next step towards independence.” Next, Bridgette gave us a tour of the facility. It’s a fantastic place; everything looked new and clean, and you could feel the care of the organization in the small details.
They offer individual rooms for families, a kids’ playroom, quiet rooms, and a business center where the residents must commit time during the day to looking for jobs.

Getting Down to Work

Home of Hope needed help organizing their storage rooms. Our volunteer group lent a hand for a few hours by sorting and organizing their kitchen/cafeteria, the kids’ playroom, and storage closets. We split into teams, and I was part of the pantry team. The pantry was overflowing with donations, and we could barely enter it because of the boxes. We decided to take everything out, organize it by type of food, and label the shelves. Organizing is one of my favorite things to do, and I had an enthusiastic team that soon found a rhythm to get the task done. It took us less than a couple of hours to get everything organized. After we had finished our work, we gathered for our last team lunch of the offsite. The volunteer activity was an excellent way to end our Soliant offsite week, and I’m proud to be part of a team that cares about giving back. Soliant volunteers with Bridgette from the Home of Hope. The post Volunteering at Home of Hope – Gwinnett Children’s Shelter appeared first on Soliant Consulting.
8. The purpose of this blog is to show you how to add Lightning components to Visualforce pages. I assume that you already have basic knowledge of VF pages and are able to create a basic Lightning component that you can view via a Lightning app. Start off by creating a couple of new Lightning components and a Lightning app to hold them. I just used a couple of Lightning components I previously created when learning how to build them: helloWorld.cmp (see Figure 1) and helloPlayground.cmp (see Figure 2). I then added an app called ‘harnessApp’ to hold them. helloWorld.cmp Figure 1 helloPlayground.cmp Figure 2

<aura:application extends="ltng:outApp">
    <c:helloWorld />
    <c:helloPlayground />
</aura:application>

Notice the ‘extends="ltng:outApp"’ in the above app. What this does is say that this app can be hosted outside of Lightning but will continue to use the Salesforce Lightning Design System (SLDS) styling. You can instead choose not to use the SLDS styling by using ‘ltng:outAppUnstyled’. In my VF page, we have a special include for Lightning:

<apex:includeLightning />

We also need to create a section of the page for the Lightning components to appear in, so a simple one here is:

<div id="lightning" />

It looks empty, but we will fill that in with some JavaScript later.

$Lightning.use("c:harnessApp", function(){});

Here we use the new app that I created. If you run your page at this point, nothing will happen; the page requires you to manually tell components where to appear. Notice the ‘c:’ in the expression. This refers to the default namespace. If your org has a different namespace than the default, you will need to change the ‘c’ portion to whatever that is. Inside the function that we just created, we add another line:

$Lightning.createComponent("c:helloWorld", {}, "lightning", function(cmp){});

This actually reveals the component and places it inside the div with the id of ‘lightning’.
Also, you will notice that it only shows one of the components at this point. Adding the next component is pretty simple:

$Lightning.createComponent("c:helloPlayground", {}, "lightning", function(cmp){});

If you run it again, you can see both components now running! NOTE: There might be a slight delay before the components show up, since they are revealed via JavaScript that needs to execute. Looking at Figure 3, you might notice that the ‘Hello World’ is under the ‘Hello Playground’ even though the JavaScript above adds hello world first. When you add new components to the page, each new component is prepended in front of the others. Figure 3 – Both components running. I made an adjustment to my page so that each one has its own div and I can better control where they show:

<apex:page>
    <apex:includeLightning />
    <div id="helloWorld" />
    <div id="helloPlayground" />
    <script>
        $Lightning.use("c:harnessApp", function() {
            $Lightning.createComponent("c:helloWorld", {}, "helloWorld", function(cmp){});
            $Lightning.createComponent("c:helloPlayground", {}, "helloPlayground", function(cmp){});
        });
    </script>
</apex:page>

Figure 4 – Completed VF Page. The post How to Place Lightning Components in Visualforce Pages appeared first on Soliant Consulting.
  9. With the release of version 15 in May 2016, FileMaker introduced a new feature – the Top Call Statistics Log – which tracks up to 25 of the most expensive remote calls that occur during a collection interval. I created a video on this topic back in May and am following up now with a written blog. The information here is essentially the same as in the video. My motivation is to create a text-based reference, because I find that to be a more useful reference than a video. Statistics Log Files Some of the actions that a user takes when working with a file hosted on FileMaker Server are processed entirely client-side. An example is sorting data that has already been downloaded to the client. But most actions will result in one or more remote calls which are processed by the server. Some examples include navigating to a layout, creating a new record, committing a record, and performing a find. While the large majority of remote calls are initiated by the client, it is possible for FileMaker Server to initiate a remote call to the client. An example of this is when FileMaker Server asks the client for the values of its global fields. When we talk about “clients”, it is important to realize that this includes server-side scripts, the web publishing engine, and ODBC/JDBC connections in addition to the Pro, Go, and WebDirect clients. When a solution’s performance is suboptimal, it could be due to a specific action that a user (or a group of users) is taking. Before FileMaker 15, we had a view into remote call activity only at an aggregate level, through the usage and client statistics log files. With the top call stats log, we now gain an additional tool which allows us to view statistics for individual remote calls – the top 25 most expensive ones collected during a specified time interval. Using this log file, we now have a chance at pinpointing specific operations which may be causing degraded performance. 
The information stored in the three statistics log files is gathered during a collection interval whose default value is 30 seconds. Each entry in a statistics log file must be viewed in the context of its collection interval. At the end of every interval, the new information is added to the bottom of the log. Here are the three statistics log files and the information shown for each collection interval:

- Usage Statistics (Stats.log) – One entry which summarizes information about all of the remote calls, across all files and clients.
- Client Statistics (ClientStats.log) – One entry for every client, summarizing information about the remote calls to and from that client.*
- Top Call Statistics (TopCallStats.log) – Up to 25 entries showing discrete (not summarized) statistics from the most expensive remote calls.

* According to my understanding, the Client Statistics log is supposed to have only one entry per client for every collection interval, but in my testing, I have sometimes seen more than one entry for a client.

Configuring Log Settings

The top call statistics log is enabled in the admin console in the Database Server > Logging area, as shown in Figure 1. Once enabled, it will continue to capture information even if the admin console is closed. However, if the Database Server is stopped, the top call statistics log will not automatically re-enable once the Database Server is started up again. The top call statistics log can also be enabled or disabled using the command line, as shown in Figure 2:

fmsadmin enable topcallstats -u admin -p pword
fmsadmin disable topcallstats -u admin -p pword

Figure 1. Enable top call statistics in the admin console under Database Server > Logging (click image to enlarge). Figure 2. Use the command line to enable/disable top call statistics (click image to enlarge).
In addition to enabling and disabling the log, the admin console Database Server > Logging area is used to specify the duration of the collection interval and the size of the log file. The default values are 30 seconds for the collection interval and 40 MB for the log file. The log file size setting pertains to all of the log files, but the collection interval duration is only relevant to the three statistics log files: usage, client, and top calls. When the file size is exceeded, the existing log file is renamed by appending “-old” to the file name, and a new log file is created. If a previous “-old” file already existed, it will be deleted. You can experiment with making the collection interval shorter, but only set it to very short durations (like 1 second) while diagnosing. The client and top call statistics do create additional overhead for the server, so if you are already dealing with a stressed server, there is potential for further performance degradation. And of course the log files will grow in size much more quickly as well. So, this setting should not be kept at very low values indefinitely. Viewing the Log File Figure 3. First Row Option (click image to enlarge). The log file data is stored in a tab-delimited text file with the name TopCallStats.log. For Windows, the default path for all log files is C:\Program Files\FileMaker\FileMaker Server\Logs. The path for Mac servers is /Library/FileMaker Server/Logs/. Unlike with a Mac, the Logs path can be changed on Windows by installing FileMaker Server to a non-default location. There is no viewer built into the admin console for the top call stats log file, so to view the data, you will need to open it in a text editor or an application such as Excel. You can also drag the file onto the FileMaker Pro icon (for example, on your desktop), which will create a new database file and automatically import the log data into it. 
If you do so, select the option to interpret the first row as field names (see Figure 3). Figure 4. Converted file displaying the top call stats (click image to enlarge).

Making Sense of the Top Call Stats Log Data

Each line in the log corresponds to a remote call, and each column corresponds to a particular kind of data. Here is the list of all columns, followed by a detailed look at each one: Timestamp, Start Time, End Time, Total Elapsed, Operation, Target, Network Bytes In/Out, Elapsed Time, Wait Time, I/O Time, and Client Name.

Timestamp – This is the timestamp for the collection interval, not for the remote call. In other words, all of the entries that were collected during the same interval will show the same timestamp value. The timestamps use millisecond precision, and the time zone shown is the same as the server’s. Sample value: 2016-04-23 10:55:09.486 -0500.

Start Time – This shows the number of seconds (with microsecond precision) from when the Database Server was started until the time the remote call started. Sample value: 191.235598.

End Time – Same as the Start Time, except that this shows when the remote call ended. If the remote call was still in progress when the data was collected, this value will be empty.

Total Elapsed – Number of microseconds elapsed for the remote call so far. This is the metric that determines which 25 remote calls were the most expensive ones for a given collection interval. The 25 remote calls are sorted in the log based on the Total Elapsed value, with the largest time at the top. Sample value: 1871.

Elapsed Time – Number of microseconds elapsed for the remote call for the collection interval being reported on. In the log file, Elapsed Time is shown as a column closer to the end of all of the other columns, but I am elaborating on it now, since it conceptually fits in with the Total Elapsed column. Sample value: 1871.
The Total Elapsed and Elapsed Time values will typically be the same, but they will be different for a remote call that began in a previous collection interval. For example, in the accompanying diagram, the entries for remote call B in the second collection interval (at 60 seconds) would show Total Elapsed as 33 seconds and Elapsed Time as 18 seconds (the values would actually be shown in microseconds instead of seconds). Figure 5. Remote calls diagram.

Operation – This includes the remote call name and, in parentheses, the client task being performed. The client task is only shown if applicable. For some entries, the client task will also show the percent completed. For example, for a find operation, the value might say “Query (Find)” if the operation completed before the log data was gathered at the end of the collection interval. But if the operation was still in progress, the value might say “Query (Finding 10%)”.

List of all possible remote call names: Adjust Reference Count, Build Index, Commit Records, Compare Modification Counts, Create Record, Download, Download File, Download List, Download Temporary, Download Thumbnail, Download With Lock, Get Container URL, Get DSN List, Get File List, Get File Size, Get Guest Count, Get Host Timestamp, Lock, Lock Finished, Login, Logout, Notify, Notify Conflicts, ODBC Command, ODBC Connect, ODBC Query, Open, Perform Script On Server, Query, Remove All Locks, Request Notification, Serialize, Transfer Container, Unlock, Update Table, Upgrade Lock, Upload, Upload Binary Data, Upload List, Upload With Lock, Verify Container.

List of all possible client tasks: Abort, Aggregate, Build Dependencies, Commit, Compress File, Compute Statistics, Consistency Check, Copy File, Copy Record, Count, Delete Record Set, Delete Records, Disk Cache Write, Disk Full, Disk I/O, Export Records, Find, Find Remote, Index, Lock Conflict, Optimize File, Perform Script On Server, Process Record List, Purge Temporary Files, Remove Free Blocks, Replace Records, Search, Skip Index, Sort, Update Schema, URL Data Transfer, Verify.

Target – This shows the solution element that is being targeted by the remote call operation. See the accompanying lists (below) for some sample values as well as all possible target values. The name of the hosted database file is always shown as the first value, i.e., before the first double colon. The additional information after the first value is included if it is available. In the example shown, we can see that there is a lock on one or more records in the table whose ID is 138. The ID value is not the internal table ID; it is the BaseTable ID, which comes from the XML Database Design Report (DDR). Using a table’s ID instead of its name is done for security reasons. If your table name were “Payroll”, and that name was exposed in the log file, it would leak potentially useful information about your database to a would-be hacker.

Sample values for Operation and Target:
- Unlock – MyFile
- Commit Records (Commit) – MyFile::table(138)
- Query (Find) – MyFile::table(138)::field definitions(1)
- Lock – MyFile::table(138)::records

List of all possible targets: base directory, containers, custom menu, custom menu set, field definitions, field index, file reference, file status, font, global function, globals, layout, library, master record list, records, relationship, script, table, table occurrence, theme, value list.

Network Bytes In/Out – These two columns show the number of bytes received from and sent to the client. Each entry shows a value that is pertinent to its remote call and to its corresponding collection interval only. Note that if a remote call spans more than one collection interval, it will likely send or receive additional bytes in the subsequent interval(s); i.e., the values will be different in the different collection intervals. Sample value: 57253.
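To make the Total Elapsed vs. Elapsed Time distinction concrete, here is a small illustrative sketch (Python, with hypothetical helper names, not part of FileMaker itself) computing both figures for a call that spans collection intervals:

```python
def elapsed_in_interval(call_start, call_end, interval_start, interval_end):
    """Time (any unit) of a remote call that falls inside one collection interval."""
    overlap = min(call_end, interval_end) - max(call_start, interval_start)
    return max(overlap, 0)

def total_elapsed(call_start, call_end, interval_end):
    """Elapsed time for the whole call so far, as of the end of an interval."""
    return min(call_end, interval_end) - call_start

# Remote call "B" from the diagram: starts at 15s, ends at 48s,
# with 30-second collection intervals. In the second interval (30-60s),
# the log entry shows Total Elapsed = 33 and Elapsed Time = 18.
```

This reproduces the diagram's numbers: 15 seconds of the call land in the first interval and 18 in the second, for a Total Elapsed of 33.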
Elapsed Time – The Elapsed Time statistic column is shown following the Network Bytes Out column, but we already covered it a bit earlier in the blog post, so please refer to the detailed explanation there.

Wait Time – Number of microseconds that a remote call spent waiting in the collection interval. This might happen, for example, because no processor cores were available at the time, or because some other remote call had locked the table which this remote call needed access to. Sample value: 1871.

I/O Time – Number of microseconds that a remote call spent in the collection interval reading from and writing to disk. Sample value: 1871.

Client Name – A name or identifier of a client, along with an IP address. If the client is a WebDirect client, that will be made apparent here. If the client is a server-side script, the script name will be shown. Sample client name values: John Smith (Smith Work Mac) []; Archive Old Records – Admin 1 (FileMaker Script).

How to use the top call stats log

The top call stats log gives you a better shot at identifying the factors contributing to slow performance. For example, if you have a single table that everyone is writing to or searching against, then you would expect to see a lot of remote calls having to do with managing the locking of that table or its index. Another example: if you receive reports of FileMaker being slow for everyone, and you spot a single client appearing in the top call stats log much more often than other clients, then you can investigate with that user to see what he or she is doing differently from other users. Jon Thatcher did an excellent session at the 2016 DevCon during which he gave several examples of using Top Call Stats to troubleshoot performance issues (starting at around 34:37). A recording of the session is available here: “Under the Hood: Server Performance”.
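Because TopCallStats.log is tab-delimited with a header row, it is easy to post-process outside of FileMaker. Here is a minimal sketch (Python; it assumes the column headers match the column names described above, which you should verify against your own log file) that totals Elapsed Time per client to surface the heaviest consumers:

```python
import csv
from collections import defaultdict

def top_clients(lines, limit=5):
    """Sum Elapsed Time (microseconds) per client across log lines.

    `lines` is any iterable of tab-delimited strings, the first being
    the header row (e.g. the lines of TopCallStats.log).
    """
    totals = defaultdict(int)
    for row in csv.DictReader(lines, delimiter="\t"):
        # Elapsed Time can be empty for calls still in progress
        elapsed = int(row.get("Elapsed Time") or 0)
        totals[row.get("Client Name", "?")] += elapsed
    # Largest total elapsed time first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:limit]
```

Usage would be something like `top_clients(open("TopCallStats.log", encoding="utf-8"))`, giving a quick ranking to pair with the Client Statistics log when hunting for a problem client.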
Here is Jon’s general overview of how to use the three statistics logs to identify causes of performance issues: First, identify the problem resource (CPU, RAM, disk, or network) using the Server Statistics log or an OS tool like Activity Monitor (OS X) or Task Manager or PerfMon (Windows). The server statistics log can show spikes (for example, long elapsed time), but not which client caused them. Next, identify the problem client(s), if any, with Client Statistics. This log can show which client caused the spike, but not which operation caused it. Finally, use Top Call Statistics to identify the problem operation(s). References Tracking activity in log files in FileMaker Server – FileMaker Knowledge Base Viewing Statistics in FileMaker Server – FileMaker Knowledge Base Top Call statistics logging – FileMaker Knowledge Base Top Call Stats Logging video – Soliant TV “Under the Hood: Server Performance” – Jon Thatcher’s 2016 DevCon session Session slides for Jon Thatcher’s session – FileMaker Community The post FileMaker Server Top Call Statistics Logging appeared first on Soliant Consulting.
10. Nowadays, consumers have more information available to them online, resulting in new buying behavior that has changed the sales process in recent years. These changes are pushing marketing and sales teams to combine tools and work together to deliver an effective sales experience. Tools like Salesforce and Pardot embrace these new buying behaviors and help marketing and sales teams sell smarter. Together, these tools help businesses engage with customers in a cohesive, personalized selling process that meets consumers’ current needs. One of the most exciting ways to leverage your Salesforce and Pardot tools is exploring the Lightning Experience. The Engagement History Lightning component is a custom component that displays Pardot prospect activities in Salesforce, giving sales representatives data about their prospects’ interactions and the ability to respond to these actions quickly but in a personable way. Here are some highlights:

Explore Prospect Activity History — The prospect’s browsing history: how many website visits, which pages were viewed, and which content was downloaded. All the valuable information that helps you better understand prospect needs.

Simpler Interface — Replacing the Visualforce page with the Engagement History Lightning component gives you a simpler experience and presents information in a way that supports a more customized view and a personalized conversation with your prospects.

Automatic Notifications — Sales reps are automatically notified when a prospect shows interest in their product. The notification enables them to manage leads with relevant content, and to act more swiftly and directly within Salesforce.

The Engagement History Lightning component is supported in the Lightning App Builder (and in any other app that allows the addition of custom Lightning components), conveniently making all the information available on the go.
You can set up this Lightning Experience enhancement by editing a record page or creating a new page from the Lightning App Builder. It is important to note that My Domain must be enabled in your Salesforce org to add the Engagement History component onto lead or contact pages, and configuring permissions might be needed. If you need step-by-step instructions on how to add components to Lightning Experience, read the Salesforce “Configure Lightning Experience Record Pages” article for more information. Take full advantage of your tools to improve your customer’s experience and enable your sales team. With just a few clicks, you can add the Lightning component to your company’s records page, and help your sales reps fully understand the buying behaviors of your customer base and work smarter. Sources Configure Lightning Experience Record Pages – Salesforce Help Engagement History for Lightning Reference – Salesforce Knowledge Base The post Lightning Experience for Pardot appeared first on Soliant Consulting.
11. Once Thanksgiving is over, it seems like the last month of the year kicks into high gear. At Soliant, each of our offices holds a holiday dinner where everyone gets together for good food, conversation, and a gift exchange.

Holiday Cheer in California

The California team started with a dinner at West Park Bistro in San Carlos, CA. On the night of our dinner, nearby streets were closed off in preparation for the “Night of Holiday Lights” festivities scheduled to commence that evening. What normally is an easy parking situation turned into a “Where’s Waldo” version for parking spaces. By the time everyone arrived at the restaurant, we were all ready for the meal to start, post haste! Our private room was also where all the wine is kept. We were disciplined and did not grab any of the wine from the racks. When it came time for our White Elephant gift exchange, there were a couple of sought-after gifts that reached the limit on times stolen. We had a boisterous and fun close to our dinner.

Holiday Dinner in Pennsylvania

The next holiday dinner was at L’angolo Restaurant in south Philadelphia, where our Pennsylvania team gathered for a delicious Italian meal. When I spoke with Managing Director Craig Stabler about their party, he said they did a Yankee Swap. I was curious whether it was the same thing as a White Elephant exchange and found out that it is — it goes by different names, such as Yankee Swap, Dirty Santa, and so on. Everyone enjoyed the delicious food, and rather than stealing gifts when it came time for their Yankee Swap, they all opened them at the same time. No one had to try hiding their gift under a chair to prevent it from getting stolen.

Holiday Celebration in Chicago

Our final holiday dinner was held at Formento’s, two blocks away from our Chicago headquarters, where everyone enjoyed scrumptious Italian food.
I’m sure with the extremely cold temperatures, that short walk from the office was much appreciated. The Chicago team does a White Elephant gift exchange, but with an added twist. Six years ago, someone did a “re-gift” by bringing one of our gray, button-down Soliant shirts as their gift. The next year, the person who ended up with the shirt brought it back, but with embellishments on the epaulettes. Thus, a tradition was born. Whoever ends up with the Soliant shirt at the end of the gift exchange must bring it back to the next year’s holiday dinner with a new embellishment. Previous embellishments have included fancy epaulettes, color piping, a light, silhouette patches, a fleur-de-lis, and a hat. This year, Dawn Heady brought the shirt back with even more lights, including a light-up tie! The Chicago folks have brought their gift exchange game up to another level. As we close the year and begin our holiday break, I am so thankful for the fantastically talented, smart, and witty people that I get to work and interact with every day. Happy holidays, everyone! The post Happy Holidays at Soliant appeared first on Soliant Consulting.
12. FileMaker Cloud running in Amazon Web Services (AWS) delivers tremendous value and cost savings over owning and operating a traditional on-premise server. However, there are still costs involved, and it is a good idea to be mindful of them; indeed, tracking costs is part of a well-architected application. This also applies to the standard version of FileMaker Server running on an AWS EC2 instance, so lessons learned here will be applicable in the greater context across all AWS services. I especially recommend trying out FileMaker Cloud and AWS services in general, which offer a free trial and free tier services, respectively. Just remember, the free trial has a limit, so either continue with annual licensing for FileMaker or stay within the threshold when evaluating the services you will need.

Minding the Till

CloudWatch is an AWS service that offers the ability to, among other things, set billing alarms that let you know when you have exceeded spending thresholds. In the age of virtual servers where everything is scriptable, it makes good sense to take advantage of this feature to avoid unexpected charges when you get a monthly bill. It is also easy to set up, so why not?

Step 1 – Enable Billing Alerts

First, you will need to do this from the “root” account, which is the account you first created when setting up your AWS account. If you only use one account, then your account is the root account. Log in to the AWS console and open the Billing and Cost Management dashboard. Select “Preferences” from the left-hand navigation (see Figure 1). Check the box next to “Receive Billing Alerts” to enable the service. Click “Save preferences” to save changes. Figure 1. Check the “Receive Billing Alerts” box (click image to enlarge).

Step 2 – Create an Alarm

Once you have enabled billing alerts, you can create a billing alarm in CloudWatch. Open the CloudWatch console by opening the Services menu and selecting CloudWatch from the Management Tools section.
Make sure you are in the US East region. This is the region that billing data is stored in, regardless of what worldwide region you have services running in. Choose “Alarms”. Click on “Create Alarm”. Then click on “Billing Metrics” to select that category (see Figure 2). Check the box on the line with “USD” under the Total Estimated Charge section. Click “Next” to continue. Give the alarm a name, like “Billing” (see Figure 3), and set the threshold you would like to be notified at. For example, whenever charges exceed $100 a month. Figure 2. Create an Alarm (click image to enlarge). Figure 3. Set the Alarm Threshold (click image to enlarge). Step 3 – Specify Alert Recipients Next, we need to set up a distribution list of those who will get notified in the Actions section of this dialog (see Figure 4). Click on “New list” next to the “Send notification to” drop-down list. Then you can add email addresses to the “Email list”. Separate multiple email addresses with commas. Make sure to give your notification list a name. Click on “Create Alert” to finish. Figure 4. Define the Alert actions (click image to enlarge). The recipients will receive an email to validate their email addresses. Once confirmed, the recipients will start receiving alerts. AWS Simple Notification Service You may not have been aware of this, but you created an SNS Topic in the preceding steps. Simple Notification Service (SNS) is another very useful AWS service used to send various kinds of notifications. In this case, the notification is in the form of emails, but it could also include HTTP endpoints or text messages. If you are interested in seeing details about the Topic you created, you can navigate to the SNS dashboard by opening the Services menu and selecting SNS from the Messages section. From there, click on Topics to see the distribution list we created above. Click on the link for the ARN (Amazon Resource Name) to view the list of subscriptions to this topic.
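The console steps above can also be scripted. Here is a rough sketch (not from the original post) using the AWS CLI, assuming it is configured with credentials for the billing account; the topic name, email address, and $100 threshold are placeholders you would adjust:

```shell
# Create an SNS topic for billing notifications (topic name is arbitrary)
TOPIC_ARN=$(aws sns create-topic --name billing-alerts \
    --region us-east-1 --query TopicArn --output text)

# Subscribe an email address to the topic (the recipient must confirm by email)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
    --notification-endpoint --region us-east-1

# Alarm when estimated monthly charges exceed $100.
# Billing metrics live in us-east-1 and are reported roughly every 6 hours (21600 s).
aws cloudwatch put-metric-alarm \
    --alarm-name Billing \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 100 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions "$TOPIC_ARN" \
    --region us-east-1
```

This produces the same alarm and SNS subscription as the console walkthrough, which can be handy if you spin up accounts regularly.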
You will see the email addresses you entered above and their subscription status. If you ever need to update the billing alert recipient list, you can do so here in the SNS Topic. Cost Optimization Cost optimization is one pillar of a well-architected framework and an essential part of a deployment strategy. Billing alerts can help with this objective. They are easy to set up and configure, so I would recommend utilizing this service to aid in a successful FileMaker Cloud (or FileMaker Server) AWS deployment. Be sure to read these other AWS-related posts to learn more: FileMaker Server on Amazon Web Services Quick Start Guide FileMaker Hosting Info and More Fun with Amazon Web Services Backups in the Cloud with AWS The post Billing Alerts for FileMaker Cloud appeared first on Soliant Consulting.
  13. Moving from Vagrant to Docker can be a daunting idea. I’ve personally been putting it off for a long time, but since I discovered that Docker had released a “native” OS X client, I decided it was finally time to give it a go. I’ve been using Vagrant for years to spin up a unique development environment for each of the client projects that I work on, and it works very well, but it does have some shortcomings that I was hoping Docker would alleviate. I’ll tell you now, the transition to Docker was not as difficult as I had built it up to be in my mind. Let’s start off with the basics of Docker and how it differs from Vagrant. Docker is a container-based solution, where you build individual containers for each of the services you require for your application. What does this mean practically? Well, if you’re familiar with Vagrant, you will know that Vagrant helps you create one large monolithic VM and installs and configures (through configuration management tools like Puppet or Chef) everything that your application needs. This means that for each project, you have a full stack VM running, which is very resource intensive. Docker, on the other hand, can run only the services you need by utilizing containers. Docker Containers So what are Docker containers? Well, if we’re developing a PHP application, there are a few things that we will need. We need an application server to run PHP, a web server (like Apache or nginx) to serve our code, and a database server to run our MySQL instance. In Vagrant, I would have built an Ubuntu VM and had Puppet install and configure these services on that machine. Docker allows you to separate those services and run each service in its own container, which is much more lightweight than a full VM. Docker then provides networking between those containers to allow them to talk to each other.
NOTE: In my example below I’m going to combine the PHP service and the Apache service into one container for simplicity, since logically there isn’t a compelling reason to separate them. One Host to Rule Them All At first, running multiple containers seems like it would be MORE resource intensive than Vagrant, which only runs a single VM. In my example, I’m now running multiple containers where I only had to run a single Vagrant VM… how is this a better solution? Well, the way that Docker implements its containers makes it much more efficient than an entire VM. Docker at its heart runs on a single, very slimmed-down host machine (on OS X). For the purpose of this article, you can think of Docker as a VM running on your machine; each container that you instantiate runs on that VM and gets its own sandbox to access necessary resources, separating it from other processes. This is a very simplistic explanation of how Docker works, and if you’re interested in a more in-depth explanation, Docker provides a fairly thorough overview here: Docker Images Now that we know what Docker containers are, we need to understand how they’re created. As you may have guessed from the header above, you create containers using Docker images. A Docker image is defined in a file called a ‘Dockerfile’, which is very similar in function to a Vagrantfile. The Dockerfile simply defines what your image should do. Similar to how an object is an instance of a class, a Docker container is an instance of a Docker image. Like an object, Docker images are also extensible and reusable. So a single MySQL image can be used to spin up database service containers on 5 different projects. You can create your own Docker images from scratch, or you can use and extend any of the thousands of images available on Docker Hub. Image Extensibility As I noted above, Docker images are extensible, meaning that you can use an existing image and add your own customizations on top of it.
In the example below, I found an image on the Docker Hub, ‘eboraas/apache-php’, that was very close to what I needed with just a couple tweaks. One of the big advantages of Docker is that you are able to pull an image and extend it to make your own customizations. This means that if the base image changes, you will automatically get those changes the next time you run your Docker image, without further action on your part. Docker Compose When you install Docker on OS X, you’ll get a tool called Docker Compose. Docker Compose is a tool for defining and running applications with multiple Docker containers. So instead of having to individually start all of your containers on the command line, each with their own parameters, it allows you to define those instructions in a YAML file and run one command to bring up all the containers. Docker Compose is also what will allow your Docker containers to talk to each other. After all, once your web server container is up and running, it will need to talk to your database server, which lives in its own container. Docker Compose will create a network for all your containers to join so that they have a way to communicate with each other. You will see an example of this in our setup below. Docker Development Application Setup All of this Docker stuff sounds pretty cool, right? So let’s see a practical example of setting up a typical PHP development environment using Docker. This is a real-world example of how I set up my local dev environment for a new client with an existing code base. Install Docker The first thing you’re going to want to do is install Docker. I’m not going to walk through all the steps here, as Docker provides a perfectly good guide. Just follow the steps here: Now that you’ve got Docker installed and running, we can go ahead and open up a terminal and start creating the containers that we’ll need!
MySQL Container Now, typically I would start with my web server, and once that was up and running I would worry about my database. In this case the database is going to be simpler (since I’ll need to do some tweaking on the web server image), so we’ll start with the easier one and work our way up. I’m going to use the official MySQL image from the Docker Hub: You can get this image by running: docker pull mysql After pulling the mysql image you should be able to type ‘docker images’ and it will show up in the list: Docker Images Now that we have pulled the image to our local machine, we can run it with this command: docker run mysql This will create a container from the image with no configuration options at all, just a vanilla MySQL server instance. This is not super useful for us, so let’s go ahead and `Ctrl + C` to stop that container, and we’ll take it one step further with this command: docker run -p 3306:3306 --name my-mysql -e MYSQL_ROOT_PASSWORD=1234 -d mysql:5.6 We’re now passing in a handful of optional parameters to our run command, which do the following: `-p 3306:3306` – This option is for port forwarding. We’re telling the container to forward its port 3306 to port 3306 on our local machine (so that we can access mysql locally). `--name my-mysql` – This is telling Docker what to name the container. If you do not provide this, Docker will just assign a randomly generated name, which can be hard to remember/type (like when I first did this and it named my container `determined_ptolemy`). `-e MYSQL_ROOT_PASSWORD=1234` – Here we are setting an environment variable, in this case that the root password for the MySQL server should be ‘1234’. `-d` – This option tells Docker to background the container, so that it doesn’t sit in the foreground of your terminal window. `mysql:5.6` – This is the image that we want to use, with a specified tag. In this case I want version 5.6, so I specified it here. If no tag is specified, it will just use latest.
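Before moving on, it can be handy to verify that the container actually came up healthy. A quick sketch, not from the original post, using the my-mysql container and password from the run command above:

```shell
# Tail the server log to confirm MySQL finished initializing
# (look for "mysqld: ready for connections")
docker logs my-mysql

# Open a mysql shell inside the container itself,
# using the password set via MYSQL_ROOT_PASSWORD
docker exec -it my-mysql mysql -uroot -p1234
```

`docker exec` runs a command inside an already-running container, so this works even before any ports are wired up to other containers.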
After you’ve run this command, you can run ‘docker ps’ and it will show you the list of your running containers (if you do ‘docker ps -a’ instead, it will show all containers – not just running ones). This is kind of a clunky command to have to remember and type every time you want to bring up your MySQL instance. In addition, bringing up the container in this way forwards the 3306 port to your local machine, but doesn’t give it an interface to interact with other containers. But no need to worry; this is where Docker Compose is going to come in handy. For now, let’s just stop and remove our container, and we’ll use it again later with docker-compose. The following commands will stop and remove the container you just created (but the image will not be deleted): docker stop my-mysql docker rm my-mysql NOTE: Explicitly pulling the image is not required; you can simply do `docker run mysql` and it will pull the image and then run it, but we’re explicitly pulling just for the purpose of demonstration. Apache/PHP Container I’ve searched on Docker Hub and found a suitable Apache image that also happens to include PHP: Two birds with one stone, great! Now, this image is very close to what I need, but there are a couple of things missing that my application requires. First of all, I need the php5-mcrypt extension installed. This application also has an ‘.htaccess’ file that does URL rewriting, so I need to set ‘AllowOverride All’ in the Apache config. So, I’m going to create my own image that extends the ‘eboraas/apache-php’ image and makes those couple changes. To create your own image, you’ll need to first create a Dockerfile.
In the root of your project go ahead and create a file named ‘Dockerfile’ and insert this content:

FROM eboraas/apache-php
COPY docker-config/allowoverride.conf /etc/apache2/conf.d/
RUN apt-get update && apt-get -y install php5-mcrypt && apt-get clean && rm -rf /var/lib/apt/lists/*

Let’s go through this line by line: we use ‘FROM’ to denote what image we are extending; Docker will use this as the base image and add on our other commands. ‘COPY’ tells Docker to copy the ‘docker-config/allowoverride.conf’ file from my local machine to ‘/etc/apache2/conf.d’ in the container. ‘RUN’ runs a command in the container that updates apt, installs php5-mcrypt, and then cleans up after itself. Before this will work, we need to actually create the file we referred to in line 2 of the Dockerfile. So create a folder named ‘docker-config’ and a file inside of that folder called ‘allowoverride.conf’ with this content:

<Directory "/var/www/html">
    AllowOverride All
</Directory>

The following commands do not need to be executed for this tutorial; they are just for example! If you do run them, just be sure to stop the container and remove it before moving on. At this point, we could build and run our customized image: docker build -t nick/apache-php56 . This will build the image described in our Dockerfile and name it ‘nick/apache-php56’. We could then run our custom image with: docker run -p 8080:80 -p 8443:443 -v /my/project/dir/:/var/www/html/ -d nick/apache-php56 The only new option in this is: `-v /my/project/dir/:/var/www/html/` – This syncs a volume to the container, mapping /my/project/dir on the local machine to /var/www/html in the container. Docker Compose Instead of doing the complicated ‘docker run […]’ commands manually, we’re going to go ahead and automate the process so that we can bring up all of our application containers with one simple command!
The command that I’m referring to is ‘docker-compose’, and it gives you a way to take all of those parameters that we tacked onto the ‘docker run’ command and put them into a YAML configuration file. Let’s dive in. Create a file called ‘docker-compose.yml’ (on the same level as your Dockerfile) and insert this content:

version: '2'
services:
  web:
    build: .
    container_name: my-web
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - .:/var/www/html
    links:
      - mysql
  mysql:
    image: mysql:5.6
    container_name: my-mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: 1234

This YAML config defines two different containers and all of the parameters that we want when they’re run. The ‘web’ container tells it to ‘build: .’, which will cause it to look for our Dockerfile and then build the custom image that we made earlier. Then, when it creates the container, it will forward our ports for us and link our local directory to ‘/var/www/html’ on the container. The ‘mysql’ container doesn’t get built; it just refers to the image that we pulled earlier from the Docker Hub, but it still sets all of the parameters for us. Once this file is created, you can bring up all your containers with: docker-compose up -d Using Your Environment If you’ve followed along, you should be able to run `docker ps` and see both of your containers up and running. Since we forwarded port 80 on our web container to port 8080 locally, we can visit ‘http://localhost:8080’ in our browser and be served the index.php file that is located in the same directory as the docker-compose.yml file. I can also connect to the MySQL server from my local machine (since we forwarded port 3306) by using my local MySQL client: mysql -h -u root -p1234 NOTE: You have to use the loopback address ( instead of localhost to avoid socket errors. But how do we configure our web application to talk to the MySQL server? This is one of the beautiful things about docker-compose.
In the ‘docker-compose.yml’ file you can see that we defined two things under services: mysql and web. By default, docker-compose will create a single network for the app defined in your YAML file. Each container defined under a service joins the default network and is reachable and discoverable by other containers on the network. So when we defined ‘mysql’ and ‘web’ as services, docker-compose created the containers and had them join the same network under the hostnames ‘mysql’ and ‘web’. So in my web application’s config file where I define the database connection parameters, I can do the following:

define('DB_DRIVER', 'mysqli');
define('DB_HOSTNAME', 'mysql');
define('DB_USERNAME', 'root');
define('DB_PASSWORD', '1234');
define('DB_DATABASE', 'dbname');

As you can see, all I have to put for my hostname is ‘mysql’, since that is what the database container is named on the Docker network that both containers are connected to. Conclusions Now I’ll circle back to my original comparison of Vagrant to Docker. In my experience so far with Docker, I believe it to be better than Vagrant in almost every aspect I can think of. Docker uses fewer resources: compared to running a full stack VM, these containers are so lightweight that I can actually feel the performance difference on my laptop. Docker is faster to spin up environments: doing a ‘vagrant up --provision’ for the first time would often take in excess of 15 minutes to complete, whereas the ‘docker-compose up -d’ that we just ran took a matter of seconds. Docker is easier to configure: what would have taken me a long time writing Ruby scripts (or generating them with Puphpet) for Vagrant took no time at all with Docker – just extend an image and add a few simple commands. Hopefully this article was helpful for you in exploring what Docker has to offer.
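For day-to-day work with the environment above, a handful of docker-compose subcommands cover most of the workflow. A quick reference sketch (not from the original post; service names follow the docker-compose.yml we created):

```shell
docker-compose up -d          # build (if needed) and start all services in the background
docker-compose ps             # list the containers for this project
docker-compose logs -f web    # follow the web container's output
docker-compose exec web bash  # open a shell inside the running web container
docker-compose down           # stop and remove the containers and the project network
```

Because the YAML file captures all the run parameters, these commands are the same for every project, which is a big part of the convenience over hand-typed ‘docker run’ invocations.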
Docker also has extensive and detailed documentation available online at: If you still don’t feel ready to dive right in, it may be helpful to run through the “Get Started with Docker” tutorial that Docker provides: The post A PHP Developer’s Transition from Vagrant to Docker appeared first on Soliant Consulting.
  14. While small and incremental deployments of features to a Salesforce production org are best practice, there are times when multiple, or large, areas of functionality must be released simultaneously. Accordingly, a large-scale Salesforce deployment can invoke a high degree of ambivalence among the team involved in its preparation. On one hand, there should be a great degree of excitement. Chances are the functionality you are preparing to implement will alleviate pain points in your existing org, or perhaps greatly simplify existing workflows. On the other hand, it’s also quite normal to feel a degree of anxiety. Large-scale Salesforce deployments merit significant planning and attention in order to ensure a successful rollout. In those situations, proper steps should be taken to minimize disruption of the production environment and operation. With these practices in place, you can help to ensure that any deployment is truly successful. Thorough end-to-end testing Oftentimes, end-to-end testing may be neglected in favor of unit tests, to focus on specific details. Ensuring that proper end-to-end testing of the features entailed in your deployment has been conducted should make you feel much more comfortable about user experience post-deployment. For added benefit and confidence, this testing should be completed by the impacted user groups. Testing corner and edge cases Salesforce deployments require that code being deployed to production have tests that provide 75% code coverage. While this is often covered by granular, code-based unit tests, applications that are business-oriented should also be subjected to efficient, yet thorough end-to-end testing. This is to ensure that the integrity of complex business processes is maintained.
These tests are typically conducted by users, rather than code, which allows for the detection of potential user-experience issues. Accordingly, upon ensuring that proper end-to-end testing is conducted, you should feel much more comfortable about user experience post-deployment. Testing in a full sandbox The Salesforce platform’s multi-tenant architecture means that there are significant limits that must be accounted for when developing custom applications. Some of these limits can only be tested with large amounts of data. As such, it is invaluable to ensure that your user acceptance testing is conducted in a full sandbox environment. This is particularly important, as it is the only environment which supports performance and load testing. Moreover, it allows your testing environment to be a complete replica of your production org – encompassing all data (including metadata), object records and apps. While the cost of a full sandbox may make your team hesitant, it is entirely justified with the invaluable test coverage provided. This in turn greatly lowers the risk of post-deployment issues, and accordingly results in saving the time and costs associated with encountering such issues. Making a copy of the existing production environment, if applicable Version control tools, such as Git and Subversion, are an excellent way of capturing the state of an org’s codebase through each release. If you do not have a version control system in place, having a backup copy of the existing production environment, through a sandbox refresh prior to deployment, allows for the capability of swiftly rolling back to the previous system in the event of a critical deployment issue. Additionally, you should be sure to schedule weekly Organization Data Exports to ensure that all of your Org data is backed up on a consistent basis. 
Ensure resources are on standby for resolving issues that arise While you certainly want to feel confident that your deployment will go off without a hitch, it’s invaluable to have resources readily available to handle any issues that are reported. To take things a step further, it is even more beneficial to proactively discuss a triaging plan with your team – such that you know precisely who would handle different types of issues. Establishing a formal go/no-go plan prior to the release, and setting a firm timeframe for making that decision When initially completing a deployment plan, one of the most crucial dates to set is when to make a formal “go/no-go” decision with the team. This should be assessed in a meeting that includes all parties involved in the deployment. Prior to this meeting, it’s imperative to outline all facets that should be taken into consideration, separating the truly critical components from areas that can be refined beyond the designated “go/no-go” date, or potentially after deployment. There are also a few additional steps that are important to consider. You’ll want to develop some comprehensive communication to be distributed to the user base, detailing the new functionality. It’s also greatly beneficial to offer any training that may be necessary. Finally, you’ll of course want both your development team and business stakeholders to verify the changes in production upon deployment. It is inevitable to feel some of the inherent anxiety that comes along with a large-scale deployment. However, upon following the practices outlined above, your team should feel truly confident that you have comprehensively covered all areas and are headed towards another successful release. The post Preparing for a Large-scale Salesforce Deployment appeared first on Soliant Consulting.
  15. The list data type is rather versatile, and its use is essential in many programmatic solutions on the Salesforce platform. However, there are some scenarios when lists alone do not provide the most elegant solution. One example is routing the assignment of accounts based on each user’s current capacity. Suppose we want to assign the oldest unassigned account to a user at the moment when a new account is entered into Salesforce. When working with these accounts, we might want to order them by received date, with the first entry containing the oldest date. How can we design a routing tool so that the next account to assign is stored at the front of a list? Figure 1. An account queue, ordered by received date (click image to enlarge). Queue Abstract Data Type The queue abstract data type is well suited for this type of problem. Our first step should be to define the Queue interface and what methods we want to include:

public interface Queue {
    // returns the number of entries in the queue
    Integer size();
    // returns true if there are no entries in the queue
    boolean isEmpty();
    // places the record at the end of the queue
    void enqueue(SObject o);
    // returns the entry at the front of the queue but does not remove it
    SObject first();
    // returns and removes the entry at the front of the queue
    SObject dequeue();
}

Next we need to implement this interface for accounts:

public class AccountQueue implements Queue {
    private List<Account> accounts;

    // default constructor
    public AccountQueue() {
        this.accounts = new List<Account>();
    }

    // returns the number of accounts in the queue
    public Integer size() {
        return accounts.size();
    }

    // returns true if there are no accounts in the queue
    public boolean isEmpty() {
        return accounts.isEmpty();
    }

    // places the account at the end of the queue
    public void enqueue(SObject o) {
        Account newAccount = (Account) o;
        accounts.add(newAccount);
    }

    // returns the account at the front of the queue
    public Account first() {
        if (isEmpty()) {
            return null;
        }
        return accounts.get(0);
    }

    // returns and removes the account at the front of the queue
    public Account dequeue() {
        if (isEmpty()) {
            return null;
        }
        Account firstAccount = accounts.get(0);
        accounts.remove(0);
        return firstAccount;
    }
}

On the Account object, we should create two custom fields. The first is called Assigned, which is a lookup to the User object. The second is the Received Date, which is a date field. On the User object, we can create a number field called Capacity, which will tell us how many more applications a User can be assigned. Once this number reaches zero, we should not assign any more applications to that particular user. In order for this process to occur when an account is inserted, we will need an Account trigger:

trigger Account on Account (before insert) {
    if (trigger.isBefore && trigger.isInsert) {
        new AccountTriggerHandler().beforeInsert(;
    }
}

Here is the trigger handler:

public class AccountTriggerHandler {
    // list of accounts that are available for assignment to a User
    private List<Account> unassignedAccounts {
        get {
            if (unassignedAccounts == null) {
                unassignedAccounts = [SELECT ID, Name, Received_Date__c
                                      FROM Account
                                      WHERE Received_Date__c != null AND Assigned__c = null
                                      ORDER BY Received_Date__c];
            }
            return unassignedAccounts;
        }
        private set;
    }

    // Account queue where the account at the front of the queue has the oldest received date
    private AccountQueue unassignedAccountQueue {
        get {
            if (unassignedAccountQueue == null) {
                unassignedAccountQueue = new AccountQueue();
                for (Account a : unassignedAccounts) {
                    unassignedAccountQueue.enqueue(a);
                }
            }
            return unassignedAccountQueue;
        }
        private set;
    }

    // Map of users that are able to receive assigned applications
    private Map<ID, User> userMap {
        get {
            if (userMap == null) {
                userMap = new Map<ID, User>([SELECT ID, Capacity__c FROM User WHERE Capacity__c != null]);
            }
            return userMap;
        }
        private set;
    }

    public void beforeInsert(List<Account> accountList) {
        // obtain the number of accounts in the trigger
        Integer numberOfAccountsToAssign = accountList.size();
        // hold a list of accounts that will be assigned to users
        List<Account> accountsToAssign = new List<Account>();
        for (Integer i = 0; i < numberOfAccountsToAssign; i++) {
            // obtain the id of the next user that can receive an application
            ID userIDNextToAssign = getNextAssignedUser();
            // reduce that user's capacity by 1
            reduceCapacity(userIDNextToAssign);
            // determine the next account that is to be assigned
            Account unassignedAccount = unassignedAccountQueue.dequeue();
            // if there were any accounts remaining in the queue, assign that account
            if (unassignedAccount != null) {
                unassignedAccount.Assigned__c = userIDNextToAssign;
                accountsToAssign.add(unassignedAccount);
            }
        }
        // update unassigned accounts
        update accountsToAssign;
        // update the user records
        update userMap.values();
    }

    // return the id of the user that will be assigned to the next available account
    private ID getNextAssignedUser() {
        ID largestCapacityUserID;
        // find the user id of the largest capacity user
        Integer maxCapacity = 0;
        for (ID userID : userMap.keySet()) {
            Integer userCapacity = (Integer) userMap.get(userID).Capacity__c;
            if (maxCapacity < userCapacity && userCapacity > 0) {
                maxCapacity = userCapacity;
                largestCapacityUserID = userID;
            }
        }
        return largestCapacityUserID;
    }

    // reduce the capacity of the user with id userID by 1
    private Map<ID, User> reduceCapacity(ID userID) {
        // decrease the capacity of that user by 1
        if (userID != null) {
            User usr = userMap.get(userID);
            usr.Capacity__c = usr.Capacity__c - 1;
        }
        return userMap;
    }
}

The trigger fires when a new account is entered into Salesforce, then searches for the account with the oldest received date, and assigns it to the user with the highest capacity. To demonstrate, we can set one user to have a capacity of 1, and a second user to have a capacity of 2. If we insert three accounts, then the three accounts with the oldest received date will be distributed between both users. Here are the existing accounts before we insert the new accounts: Figure 2.
State of existing accounts before the new ones are inserted (click image to enlarge). Here are the assignments after we add the new accounts: Figure 3. The older accounts have now been assigned after inserting the new ones (click image to enlarge). Since three accounts were inserted into Salesforce, we needed to assign three accounts. Mario had a capacity of 2, so he was assigned the first account in the queue. Next, Taylor, who had a capacity of 1, was assigned an account. He was then at full capacity, with 0 remaining. Mario, now at a capacity of 1, received the next account in the queue. There are some alternative ways to approach this problem using only lists, but using the Queue interface simplifies the implementation. One approach could have been to reverse the order of the unassigned accounts, starting with the newest account as the first item and the oldest as the last. This might be unintuitive, and would require the developer to store or compute the size of the list in order to access the last element. Another approach might have been to keep the same order as the queue solution, but to simply remove the element from the front of the list using the remove(index) function. The queue implementation abstracts this process, and removes the requirement to continually check if the list is empty, as the dequeue method already provides that functionality. The queue abstract data type is a natural fit for any first-in, first-out business requirement. Queues can also be extended to other objects in Salesforce, rather than just the Account object. Queues and other abstract data types can provide templates for solutions to many programming challenges, and Salesforce projects are no exception. The post Using the Queue Abstract Data Type to Assign Records to Users appeared first on Soliant Consulting.